Test Report: Docker_Linux_docker_arm64 21682

7a7892355cfa060afe2cc9d2507b1d1308b66169:2025-10-02:41740

Test failures (8/346)

TestAddons/serial/Volcano (211.73s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 78.696859ms
addons_test.go:868: volcano-scheduler stabilized in 80.045856ms
addons_test.go:876: volcano-admission stabilized in 80.068765ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-45dxm" [37dff980-6969-4428-b125-5087dd5dda75] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00317313s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-v68lw" [5967dfc4-0dd0-4e3d-badf-4e2428706d9d] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002969629s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-rsgcj" [ad899dd9-78c4-415c-a477-2bb27f279478] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002981721s
addons_test.go:903: (dbg) Run:  kubectl --context addons-991638 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-991638 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-991638 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [4205c620-276a-4caf-ae6c-f51d48e8bda3] Pending
helpers_test.go:352: "test-job-nginx-0" [4205c620-276a-4caf-ae6c-f51d48e8bda3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:935: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
addons_test.go:935: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-10-02 20:34:12.485935086 +0000 UTC m=+389.654112770
addons_test.go:935: (dbg) Run:  kubectl --context addons-991638 describe po test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) kubectl --context addons-991638 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             addons-991638/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:31:13 +0000
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-908e4f24-90c7-4a1b-882e-1647d80f50aa
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:           10.244.0.27
Controlled By:  Job/test-job
Containers:
  nginx:
    Container ID:  
    Image:         nginx:latest
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      10m
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n25m5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-n25m5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From     Message
  ----     ------     ----                 ----     -------
  Normal   Scheduled  2m59s                volcano  Successfully assigned my-volcano/test-job-nginx-0 to addons-991638
  Warning  Failed     94s                  kubelet  Failed to pull image "nginx:latest": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    12s (x5 over 2m59s)  kubelet  Pulling image "nginx:latest"
  Warning  Failed     12s (x4 over 2m58s)  kubelet  Failed to pull image "nginx:latest": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     12s (x5 over 2m58s)  kubelet  Error: ErrImagePull
  Normal   BackOff    0s (x12 over 2m58s)  kubelet  Back-off pulling image "nginx:latest"
  Warning  Failed     0s (x12 over 2m58s)  kubelet  Error: ImagePullBackOff
addons_test.go:935: (dbg) Run:  kubectl --context addons-991638 logs test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) Non-zero exit: kubectl --context addons-991638 logs test-job-nginx-0 -n my-volcano: exit status 1 (123.585408ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "test-job-nginx-0" is waiting to start: image can't be pulled

** /stderr **
addons_test.go:935: kubectl --context addons-991638 logs test-job-nginx-0 -n my-volcano: exit status 1
addons_test.go:936: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
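
The failure itself is environmental: every pull of nginx:latest from inside the cluster hit Docker Hub's unauthenticated pull rate limit (see the ErrImagePull/ImagePullBackOff events above), so the Volcano job never ran. A sketch of the usual workarounds on a shared CI host follows; none of these were part of this run, and the profile name is taken from the log above:

	# Pre-pull on the host and side-load the image so the kubelet never
	# contacts Docker Hub:
	docker pull nginx:latest
	minikube -p addons-991638 image load nginx:latest

	# Or log the node's Docker daemon in to Docker Hub so the higher
	# authenticated rate limit applies (<user> is a placeholder):
	minikube -p addons-991638 ssh -- docker login -u <user>

	# Or create the cluster against a Docker Hub mirror from the start:
	minikube start --registry-mirror=https://mirror.gcr.io
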
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-991638
helpers_test.go:243: (dbg) docker inspect addons-991638:

-- stdout --
	[
	    {
	        "Id": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	        "Created": "2025-10-02T20:28:36.164446632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 705058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:28:36.229753591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hosts",
	        "LogPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84-json.log",
	        "Name": "/addons-991638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-991638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-991638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	                "LowerDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953-init/diff:/var/lib/docker/overlay2/3c380b0850506122817bc2917299dd60fe15a32ab35b7debe4519f1f9045f4d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-991638",
	                "Source": "/var/lib/docker/volumes/addons-991638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-991638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-991638",
	                "name.minikube.sigs.k8s.io": "addons-991638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "768c8a7310c370a43da0c26c5d036d5e7219705fa051b89897a391452ea6d9a6",
	            "SandboxKey": "/var/run/docker/netns/768c8a7310c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-991638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a0:60:40:27:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05f483610a0fe679b5a4ae4efa1318f553b88c9d264d6b136b55ee1eb76c3654",
	                    "EndpointID": "cbb01d4023b7a4128894d4e3144f6ccc9b60257273c0bfbde032cb7624cd4fb7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-991638",
	                        "ac51530cb591"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
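
When only a few fields of the inspect dump matter during triage, docker inspect's format flag can pull them directly instead of reading the whole document; for example (equivalent to the State and Ports sections above):

	# Container state at a glance:
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' addons-991638

	# Host port bindings only (8443/tcp is the Kubernetes API server):
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-991638
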
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-991638 -n addons-991638
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 logs -n 25: (1.442214227s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-625181 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ start   │ -o=json --download-only -p download-only-545661 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p download-docker-039409 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p download-docker-039409                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p binary-mirror-067581 --alsologtostderr --binary-mirror http://127.0.0.1:39571 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p binary-mirror-067581                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ addons  │ disable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ start   │ -p addons-991638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:30 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:28:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:28:10.231562  704660 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:10.231700  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.231711  704660 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:10.231716  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.232008  704660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:28:10.232510  704660 out.go:368] Setting JSON to false
	I1002 20:28:10.233399  704660 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11417,"bootTime":1759425473,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:28:10.233494  704660 start.go:140] virtualization:  
	I1002 20:28:10.236719  704660 out.go:179] * [addons-991638] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:28:10.240328  704660 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:28:10.240425  704660 notify.go:220] Checking for updates...
	I1002 20:28:10.246179  704660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:28:10.249006  704660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:28:10.251947  704660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:28:10.255157  704660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:28:10.257883  704660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:28:10.260862  704660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:28:10.288692  704660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:28:10.288859  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.345302  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.335898449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.345417  704660 docker.go:318] overlay module found
	I1002 20:28:10.348598  704660 out.go:179] * Using the docker driver based on user configuration
	I1002 20:28:10.351429  704660 start.go:304] selected driver: docker
	I1002 20:28:10.351448  704660 start.go:924] validating driver "docker" against <nil>
	I1002 20:28:10.351462  704660 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:28:10.352198  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.405054  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.396474632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.405212  704660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:28:10.405467  704660 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:28:10.408345  704660 out.go:179] * Using Docker driver with root privileges
	I1002 20:28:10.411100  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:10.411184  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:10.411197  704660 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:28:10.411276  704660 start.go:348] cluster config:
	{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:10.414279  704660 out.go:179] * Starting "addons-991638" primary control-plane node in "addons-991638" cluster
	I1002 20:28:10.417120  704660 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:28:10.419910  704660 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:28:10.422725  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:10.422776  704660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 20:28:10.422791  704660 cache.go:58] Caching tarball of preloaded images
	I1002 20:28:10.422838  704660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:28:10.422873  704660 preload.go:233] Found /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 20:28:10.422902  704660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 20:28:10.423255  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:10.423397  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json: {Name:mk2f26d255d9ea8bd15922b678de4d5774eef391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:10.438348  704660 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:28:10.438495  704660 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:28:10.438518  704660 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:28:10.438524  704660 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:28:10.438532  704660 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:28:10.438537  704660 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:28:28.104678  704660 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:28:28.104717  704660 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:28:28.104748  704660 start.go:360] acquireMachinesLock for addons-991638: {Name:mk53aeb56b1e9fb052ee11df133ba143769d5932 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:28:28.104882  704660 start.go:364] duration metric: took 113.831µs to acquireMachinesLock for "addons-991638"
	I1002 20:28:28.104912  704660 start.go:93] Provisioning new machine with config: &{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:28:28.104985  704660 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:28:28.108517  704660 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:28:28.108807  704660 start.go:159] libmachine.API.Create for "addons-991638" (driver="docker")
	I1002 20:28:28.108861  704660 client.go:168] LocalClient.Create starting
	I1002 20:28:28.108989  704660 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem
	I1002 20:28:28.920995  704660 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem
	I1002 20:28:29.719304  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:28:29.735220  704660 cli_runner.go:211] docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:28:29.735320  704660 network_create.go:284] running [docker network inspect addons-991638] to gather additional debugging logs...
	I1002 20:28:29.735342  704660 cli_runner.go:164] Run: docker network inspect addons-991638
	W1002 20:28:29.756033  704660 cli_runner.go:211] docker network inspect addons-991638 returned with exit code 1
	I1002 20:28:29.756065  704660 network_create.go:287] error running [docker network inspect addons-991638]: docker network inspect addons-991638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-991638 not found
	I1002 20:28:29.756079  704660 network_create.go:289] output of [docker network inspect addons-991638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-991638 not found
	
	** /stderr **
	I1002 20:28:29.756173  704660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:29.772458  704660 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d5e320}
	I1002 20:28:29.772498  704660 network_create.go:124] attempt to create docker network addons-991638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:28:29.772554  704660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-991638 addons-991638
	I1002 20:28:29.829752  704660 network_create.go:108] docker network addons-991638 192.168.49.0/24 created
	I1002 20:28:29.829781  704660 kic.go:121] calculated static IP "192.168.49.2" for the "addons-991638" container
	I1002 20:28:29.829879  704660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:28:29.847391  704660 cli_runner.go:164] Run: docker volume create addons-991638 --label name.minikube.sigs.k8s.io=addons-991638 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:28:29.864875  704660 oci.go:103] Successfully created a docker volume addons-991638
	I1002 20:28:29.864995  704660 cli_runner.go:164] Run: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:28:32.119965  704660 cli_runner.go:217] Completed: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.254927204s)
	I1002 20:28:32.120005  704660 oci.go:107] Successfully prepared a docker volume addons-991638
	I1002 20:28:32.120024  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:32.120045  704660 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:28:32.120115  704660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:28:36.088209  704660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.968050647s)
	I1002 20:28:36.088240  704660 kic.go:203] duration metric: took 3.968193754s to extract preloaded images to volume ...
	W1002 20:28:36.088386  704660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:28:36.088487  704660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:28:36.149550  704660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-991638 --name addons-991638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-991638 --network addons-991638 --ip 192.168.49.2 --volume addons-991638:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:28:36.432531  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Running}}
	I1002 20:28:36.459147  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:36.484423  704660 cli_runner.go:164] Run: docker exec addons-991638 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:28:36.539034  704660 oci.go:144] the created container "addons-991638" has a running status.
	I1002 20:28:36.539068  704660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa...
	I1002 20:28:37.262683  704660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:28:37.288911  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.309985  704660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:28:37.310010  704660 kic_runner.go:114] Args: [docker exec --privileged addons-991638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:28:37.369831  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.391035  704660 machine.go:93] provisionDockerMachine start ...
	I1002 20:28:37.391126  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.411223  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.411540  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.411549  704660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:28:37.553086  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.553108  704660 ubuntu.go:182] provisioning hostname "addons-991638"
	I1002 20:28:37.553169  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.575369  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.575674  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.575686  704660 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-991638 && echo "addons-991638" | sudo tee /etc/hostname
	I1002 20:28:37.721568  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.721652  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.747484  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.747789  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.747811  704660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-991638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-991638/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-991638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:28:37.877526  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:28:37.877550  704660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-702037/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-702037/.minikube}
	I1002 20:28:37.877573  704660 ubuntu.go:190] setting up certificates
	I1002 20:28:37.877582  704660 provision.go:84] configureAuth start
	I1002 20:28:37.877644  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:37.894231  704660 provision.go:143] copyHostCerts
	I1002 20:28:37.894324  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem (1078 bytes)
	I1002 20:28:37.894448  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem (1123 bytes)
	I1002 20:28:37.894507  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem (1675 bytes)
	I1002 20:28:37.894559  704660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem org=jenkins.addons-991638 san=[127.0.0.1 192.168.49.2 addons-991638 localhost minikube]
	I1002 20:28:38.951532  704660 provision.go:177] copyRemoteCerts
	I1002 20:28:38.951598  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:28:38.951639  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:38.968871  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:39.069322  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:28:39.087473  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:28:39.106442  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:28:39.125193  704660 provision.go:87] duration metric: took 1.247587619s to configureAuth
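The SAN list passed to the server cert generation above can be verified once the file exists. A minimal sketch, assuming openssl is available and using the path logged in this run:

# list the Subject Alternative Names baked into the generated server cert
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'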
	I1002 20:28:39.125222  704660 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:28:39.125407  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:28:39.125491  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.145970  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.146282  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.146299  704660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 20:28:39.282106  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 20:28:39.282131  704660 ubuntu.go:71] root file system type: overlay
	I1002 20:28:39.282235  704660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 20:28:39.282310  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.300258  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.300556  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.300651  704660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 20:28:39.442933  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 20:28:39.443023  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.460361  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.460680  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.460703  704660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 20:28:40.382609  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-02 20:28:39.437593143 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1002 20:28:40.382680  704660 machine.go:96] duration metric: took 2.991625077s to provisionDockerMachine
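Once the unit swap and restart complete, the effective daemon configuration can be confirmed from inside the node, using the same commands minikube itself runs later in this log. A minimal sketch:

# show the ExecStart actually in force after the drop-in replaced the base unit
sudo systemctl cat docker.service | grep -A1 '^ExecStart='
# confirm the cgroup driver the daemon ended up with (cgroupfs in this run)
docker info --format '{{.CgroupDriver}}'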
	I1002 20:28:40.382776  704660 client.go:171] duration metric: took 12.273900895s to LocalClient.Create
	I1002 20:28:40.382819  704660 start.go:167] duration metric: took 12.27401677s to libmachine.API.Create "addons-991638"
	I1002 20:28:40.382841  704660 start.go:293] postStartSetup for "addons-991638" (driver="docker")
	I1002 20:28:40.382863  704660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:28:40.382961  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:28:40.383028  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.400184  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.497649  704660 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:28:40.501057  704660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:28:40.501087  704660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:28:40.501099  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/addons for local assets ...
	I1002 20:28:40.501170  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/files for local assets ...
	I1002 20:28:40.501198  704660 start.go:296] duration metric: took 118.339458ms for postStartSetup
	I1002 20:28:40.501542  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.519025  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:40.519322  704660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:28:40.519374  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.535401  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.626314  704660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:28:40.631258  704660 start.go:128] duration metric: took 12.526256292s to createHost
	I1002 20:28:40.631280  704660 start.go:83] releasing machines lock for "addons-991638", held for 12.526385541s
	I1002 20:28:40.631365  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.648027  704660 ssh_runner.go:195] Run: cat /version.json
	I1002 20:28:40.648051  704660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:28:40.648079  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.648112  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.671874  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.672768  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.765471  704660 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:40.858838  704660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:28:40.863487  704660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:28:40.863561  704660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:28:40.891689  704660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:28:40.891716  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:40.891748  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:40.891847  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:40.905197  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 20:28:40.914585  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 20:28:40.923483  704660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 20:28:40.923613  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 20:28:40.932751  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.941795  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 20:28:40.950514  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.959583  704660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:28:40.967941  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 20:28:40.976883  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 20:28:40.986149  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 20:28:40.995305  704660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:28:41.004003  704660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:28:41.012739  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.128237  704660 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 20:28:41.231332  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:41.231381  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:41.231441  704660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 20:28:41.246943  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.259982  704660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:28:41.299529  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.312040  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 20:28:41.325475  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:41.339679  704660 ssh_runner.go:195] Run: which cri-dockerd
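At this point /etc/crictl.yaml has been rewritten to point crictl at cri-dockerd instead of containerd. A minimal sketch of verifying the switch from inside the node, assuming the stock kicbase tooling:

cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/cri-dockerd.sock
sudo crictl version         # should report RuntimeName: docker, as seen further below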
	I1002 20:28:41.343375  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 20:28:41.351275  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 20:28:41.364332  704660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 20:28:41.484463  704660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 20:28:41.601245  704660 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 20:28:41.601360  704660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 20:28:41.614352  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 20:28:41.626868  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.733314  704660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 20:28:42.111293  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:28:42.128509  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 20:28:42.145965  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.163934  704660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 20:28:42.308063  704660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 20:28:42.433113  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.552919  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 20:28:42.569022  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 20:28:42.582319  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.699949  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 20:28:42.769589  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.783022  704660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 20:28:42.783145  704660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 20:28:42.787107  704660 start.go:563] Will wait 60s for crictl version
	I1002 20:28:42.787194  704660 ssh_runner.go:195] Run: which crictl
	I1002 20:28:42.790829  704660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:28:42.815945  704660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 20:28:42.816103  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.842953  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.874688  704660 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 20:28:42.874787  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:42.890887  704660 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:28:42.895320  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:28:42.906278  704660 kubeadm.go:883] updating cluster {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:28:42.906402  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:42.906467  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.925708  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.925733  704660 docker.go:621] Images already preloaded, skipping extraction
	I1002 20:28:42.925801  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.945361  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.945383  704660 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:28:42.945393  704660 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1002 20:28:42.945504  704660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-991638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:28:42.945582  704660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 20:28:42.996799  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:42.996828  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:42.996844  704660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:28:42.996865  704660 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-991638 NodeName:addons-991638 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:28:42.996983  704660 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-991638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:28:42.997055  704660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:28:43.006552  704660 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:28:43.006645  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:28:43.015646  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1002 20:28:43.030545  704660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:28:43.044123  704660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
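The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and only copied into place later in the run. As a sanity check it can be validated offline; a minimal sketch, assuming a kubeadm recent enough to ship the validate subcommand:

sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new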
	I1002 20:28:43.057931  704660 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:28:43.061696  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:28:43.072014  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:43.187259  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:28:43.203829  704660 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638 for IP: 192.168.49.2
	I1002 20:28:43.203899  704660 certs.go:195] generating shared ca certs ...
	I1002 20:28:43.203929  704660 certs.go:227] acquiring lock for ca certs: {Name:mk80feb87d46a3c61de00b383dd8ac7fd2e19095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:43.204734  704660 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key
	I1002 20:28:44.637131  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt ...
	I1002 20:28:44.637163  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt: {Name:mkb6d8319d3a74d42b081683e314c37e53586717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637366  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key ...
	I1002 20:28:44.637379  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key: {Name:mkbd44d267c3b1cf1fed0a906ac7bf46743d8695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637481  704660 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key
	I1002 20:28:45.683223  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt ...
	I1002 20:28:45.683262  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt: {Name:mkf2892474e0dfa857be215b552060af628196ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683490  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key ...
	I1002 20:28:45.683507  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key: {Name:mkb3e427bf0a6e7ceb613b926e3c90e07409da52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683588  704660 certs.go:257] generating profile certs ...
	I1002 20:28:45.683654  704660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key
	I1002 20:28:45.683671  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt with IP's: []
	I1002 20:28:46.046463  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt ...
	I1002 20:28:46.046497  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: {Name:mk51f9d8abe3f7006e638458dae2df70cdaa936a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046676  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key ...
	I1002 20:28:46.046691  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key: {Name:mke5acc604e8c4ff883546df37d116f9c766e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046773  704660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b
	I1002 20:28:46.046795  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:28:46.569113  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b ...
	I1002 20:28:46.569145  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b: {Name:mk40a7d58b6523a132d065d0266597e722b3762d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.569955  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b ...
	I1002 20:28:46.569974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b: {Name:mkbe601cfd4f3105ca705f6de8b8f9d490a11ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.570609  704660 certs.go:382] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt
	I1002 20:28:46.570694  704660 certs.go:386] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key
	I1002 20:28:46.570747  704660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key
	I1002 20:28:46.570767  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt with IP's: []
	I1002 20:28:46.754716  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt ...
	I1002 20:28:46.754747  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt: {Name:mkd0f46ec8109fe64dda020f7c270bd3d9dd6bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.754958  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key ...
	I1002 20:28:46.754974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key: {Name:mk7b62b96428d619ab88e3c0c6972f37ee378b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.755195  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:28:46.755238  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:28:46.755269  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:28:46.755294  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem (1675 bytes)
	I1002 20:28:46.755827  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:28:46.773406  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:28:46.790954  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:28:46.807835  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:28:46.825141  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:28:46.842372  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:28:46.860238  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:28:46.877776  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:28:46.894424  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:28:46.911754  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:28:46.925117  704660 ssh_runner.go:195] Run: openssl version
	I1002 20:28:46.931161  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:28:46.940887  704660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945128  704660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945198  704660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.986089  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
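The b5213941.0 link name above is not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by the certificate's subject hash, so minikube symlinks <hash>.0 to the CA it just installed. A minimal sketch reproducing the lookup:

# compute the subject hash OpenSSL searches for (prints b5213941 here)
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
# the symlink created above resolves that hash back to the CA file
ls -l /etc/ssl/certs/b5213941.0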
	I1002 20:28:46.995228  704660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:28:46.998614  704660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:28:46.998670  704660 kubeadm.go:400] StartCluster: {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:46.998801  704660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 20:28:47.017260  704660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:28:47.024934  704660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:28:47.032572  704660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:28:47.032637  704660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:28:47.040541  704660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:28:47.040563  704660 kubeadm.go:157] found existing configuration files:
	
	I1002 20:28:47.040632  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:28:47.048232  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:28:47.048324  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:28:47.055897  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:28:47.063851  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:28:47.063972  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:28:47.071920  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.079791  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:28:47.079884  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.087482  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:28:47.095260  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:28:47.095325  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:28:47.102743  704660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:28:47.143961  704660 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:28:47.144023  704660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:28:47.171162  704660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:28:47.171292  704660 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:28:47.171362  704660 kubeadm.go:318] OS: Linux
	I1002 20:28:47.171451  704660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:28:47.171534  704660 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:28:47.171621  704660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:28:47.171707  704660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:28:47.171790  704660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:28:47.171876  704660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:28:47.171956  704660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:28:47.172038  704660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:28:47.172128  704660 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:28:47.235837  704660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:28:47.235957  704660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:28:47.236052  704660 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:28:47.257841  704660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:28:47.262676  704660 out.go:252]   - Generating certificates and keys ...
	I1002 20:28:47.262771  704660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:28:47.262845  704660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:28:47.756271  704660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:28:48.584093  704660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:28:48.888267  704660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:28:49.699713  704660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:28:50.057163  704660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:28:50.057649  704660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.779363  704660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:28:50.779734  704660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.900170  704660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:28:51.497655  704660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:28:51.954519  704660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:28:51.954818  704660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:28:53.080191  704660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:28:53.266970  704660 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:28:53.973649  704660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:28:54.725487  704660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:28:55.109834  704660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:28:55.110186  704660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:28:55.113467  704660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:28:55.117318  704660 out.go:252]   - Booting up control plane ...
	I1002 20:28:55.117435  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:28:55.117518  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:28:55.118060  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:28:55.141929  704660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:28:55.142323  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:28:55.150629  704660 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:28:55.150957  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:28:55.151008  704660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:28:55.286296  704660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:28:55.286428  704660 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:28:56.789783  704660 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501225822s
	I1002 20:28:56.789937  704660 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:28:56.790047  704660 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:28:56.790165  704660 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:28:56.790264  704660 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:28:58.802179  704660 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.012574504s
	I1002 20:29:00.806811  704660 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.017417752s
	I1002 20:29:02.791474  704660 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002021418s
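The four health endpoints polled above are plain HTTP(S) URLs on the node itself, so they can be probed by hand when one of these checks hangs. A minimal sketch, assuming a shell on the node (e.g. via `minikube ssh -p addons-991638`); only the curl flags are additions here, the URLs are the ones in the log:

    # probe the same endpoints kubeadm polls (-s silent, -f fail on HTTP errors, -k skip TLS verification)
    curl -sf  http://127.0.0.1:10248/healthz    # kubelet
    curl -skf https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -skf https://127.0.0.1:10259/livez     # kube-scheduler
    curl -skf https://192.168.49.2:8443/livez   # kube-apiserver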
	I1002 20:29:02.814104  704660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:29:02.827699  704660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:29:02.846247  704660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:29:02.846862  704660 kubeadm.go:318] [mark-control-plane] Marking the node addons-991638 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:29:02.861722  704660 kubeadm.go:318] [bootstrap-token] Using token: z0jdd4.ysfi1vhms678tv6t
	I1002 20:29:02.864796  704660 out.go:252]   - Configuring RBAC rules ...
	I1002 20:29:02.864929  704660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:29:02.869885  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:29:02.888805  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I1002 20:29:02.892893  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1002 20:29:02.897307  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:29:02.902794  704660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:29:03.198711  704660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:29:03.626604  704660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:29:04.197660  704660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:29:04.199081  704660 kubeadm.go:318] 
	I1002 20:29:04.199168  704660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:29:04.199174  704660 kubeadm.go:318] 
	I1002 20:29:04.199283  704660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:29:04.199304  704660 kubeadm.go:318] 
	I1002 20:29:04.199332  704660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:29:04.199403  704660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:29:04.199462  704660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:29:04.199470  704660 kubeadm.go:318] 
	I1002 20:29:04.199544  704660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:29:04.199559  704660 kubeadm.go:318] 
	I1002 20:29:04.199633  704660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:29:04.199648  704660 kubeadm.go:318] 
	I1002 20:29:04.199708  704660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:29:04.199805  704660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:29:04.199891  704660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:29:04.199904  704660 kubeadm.go:318] 
	I1002 20:29:04.199999  704660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:29:04.200089  704660 kubeadm.go:318] and service account keys to each node and then running the following as root:
	I1002 20:29:04.200099  704660 kubeadm.go:318] 
	I1002 20:29:04.200207  704660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200351  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 \
	I1002 20:29:04.200382  704660 kubeadm.go:318] 	--control-plane 
	I1002 20:29:04.200390  704660 kubeadm.go:318] 
	I1002 20:29:04.200503  704660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:29:04.200516  704660 kubeadm.go:318] 
	I1002 20:29:04.200612  704660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200736  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 
	I1002 20:29:04.203776  704660 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:29:04.204016  704660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:29:04.204131  704660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
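The Service-Kubelet warning names its own remedy. A one-line sketch, assuming systemd manages the kubelet unit on the node (the log starts kubelet via systemctl further below, so the unit exists but is not enabled; the warning is likely cosmetic in this environment):

    sudo systemctl enable kubelet.service   # silences the [WARNING Service-Kubelet] check on the next init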
	I1002 20:29:04.204150  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:29:04.204164  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:29:04.207498  704660 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:29:04.210410  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:29:04.217868  704660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
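The log records only the size of the conflist (496 bytes), not its contents. For orientation, a bridge-plugin conflist of this shape typically looks like the sketch below; every field value is an illustrative assumption, not the file minikube actually wrote:

    # illustrative only -- a typical two-plugin bridge conflist; all values are assumptions
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF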
	I1002 20:29:04.235604  704660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:29:04.235701  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.235739  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-991638 minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-991638 minikube.k8s.io/primary=true
	I1002 20:29:04.254399  704660 ops.go:34] apiserver oom_adj: -16
	I1002 20:29:04.369134  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.869740  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.370081  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.870196  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.369731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.870115  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.369228  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.869851  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.369279  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.869731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.972720  704660 kubeadm.go:1113] duration metric: took 4.737085496s to wait for elevateKubeSystemPrivileges
	I1002 20:29:08.972751  704660 kubeadm.go:402] duration metric: took 21.974085235s to StartCluster
	I1002 20:29:08.972769  704660 settings.go:142] acquiring lock: {Name:mk05279472feb5063a5c2285eba6fd6d59490060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.972884  704660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:29:08.973255  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/kubeconfig: {Name:mk451cd073bc3a44904ff8d0351225145e56e5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.973483  704660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:29:08.973596  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:29:08.973840  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.973881  704660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
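The toEnable map above is the programmatic form of the addon toggles; the same switches are available from the CLI against this profile. A sketch using addon names taken from the map:

    minikube -p addons-991638 addons enable volcano
    minikube -p addons-991638 addons enable csi-hostpath-driver
    minikube -p addons-991638 addons list    # shows the enabled/disabled state of every addon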
	I1002 20:29:08.973962  704660 addons.go:69] Setting yakd=true in profile "addons-991638"
	I1002 20:29:08.973977  704660 addons.go:238] Setting addon yakd=true in "addons-991638"
	I1002 20:29:08.973998  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.974491  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.974944  704660 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-991638"
	I1002 20:29:08.974969  704660 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-991638"
	I1002 20:29:08.974993  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.975410  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.975798  704660 addons.go:69] Setting cloud-spanner=true in profile "addons-991638"
	I1002 20:29:08.975820  704660 addons.go:238] Setting addon cloud-spanner=true in "addons-991638"
	I1002 20:29:08.975844  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.976228  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.978568  704660 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-991638"
	I1002 20:29:08.978639  704660 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:08.978669  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.979258  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.980070  704660 out.go:179] * Verifying Kubernetes components...
	I1002 20:29:08.980299  704660 addons.go:69] Setting registry-creds=true in profile "addons-991638"
	I1002 20:29:08.980320  704660 addons.go:238] Setting addon registry-creds=true in "addons-991638"
	I1002 20:29:08.980348  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.980878  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984024  704660 addons.go:69] Setting storage-provisioner=true in profile "addons-991638"
	I1002 20:29:08.984111  704660 addons.go:238] Setting addon storage-provisioner=true in "addons-991638"
	I1002 20:29:08.985311  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984905  704660 addons.go:69] Setting default-storageclass=true in profile "addons-991638"
	I1002 20:29:08.986095  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-991638"
	I1002 20:29:08.986385  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.997940  704660 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-991638"
	I1002 20:29:08.997997  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-991638"
	I1002 20:29:08.998330  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984914  704660 addons.go:69] Setting gcp-auth=true in profile "addons-991638"
	I1002 20:29:08.998967  704660 mustload.go:65] Loading cluster: addons-991638
	I1002 20:29:08.999148  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.999394  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984921  704660 addons.go:69] Setting ingress=true in profile "addons-991638"
	I1002 20:29:09.012451  704660 addons.go:238] Setting addon ingress=true in "addons-991638"
	I1002 20:29:09.012506  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.012981  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.017454  704660 addons.go:69] Setting volcano=true in profile "addons-991638"
	I1002 20:29:09.017490  704660 addons.go:238] Setting addon volcano=true in "addons-991638"
	I1002 20:29:09.017527  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.018061  704660 addons.go:69] Setting volumesnapshots=true in profile "addons-991638"
	I1002 20:29:09.018133  704660 addons.go:238] Setting addon volumesnapshots=true in "addons-991638"
	I1002 20:29:09.018173  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984925  704660 addons.go:69] Setting ingress-dns=true in profile "addons-991638"
	I1002 20:29:09.025533  704660 addons.go:238] Setting addon ingress-dns=true in "addons-991638"
	I1002 20:29:09.025587  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.026063  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.044490  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984928  704660 addons.go:69] Setting inspektor-gadget=true in profile "addons-991638"
	I1002 20:29:09.049039  704660 addons.go:238] Setting addon inspektor-gadget=true in "addons-991638"
	I1002 20:29:09.049079  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.049563  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984931  704660 addons.go:69] Setting metrics-server=true in profile "addons-991638"
	I1002 20:29:09.074105  704660 addons.go:238] Setting addon metrics-server=true in "addons-991638"
	I1002 20:29:09.074149  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.075253  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984945  704660 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-991638"
	I1002 20:29:09.101041  704660 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-991638"
	I1002 20:29:09.101085  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.101634  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.134221  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984949  704660 addons.go:69] Setting registry=true in profile "addons-991638"
	I1002 20:29:09.134685  704660 addons.go:238] Setting addon registry=true in "addons-991638"
	I1002 20:29:09.134721  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.135150  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.166068  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.985251  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:29:09.210573  704660 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:29:09.222512  704660 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:29:09.228645  704660 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:09.228697  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:29:09.228802  704660 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:09.228834  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:29:09.228917  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.232353  704660 addons.go:238] Setting addon default-storageclass=true in "addons-991638"
	I1002 20:29:09.232403  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.232836  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.240129  704660 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:29:09.228818  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252033  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:29:09.281457  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.289194  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.276652  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:29:09.291469  704660 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:29:09.291547  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252086  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317140  704660 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-991638"
	I1002 20:29:09.317269  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317905  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.321130  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:29:09.324328  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:29:09.329618  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:29:09.329846  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:09.329862  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:29:09.329924  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.330072  704660 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:29:09.332483  704660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:09.332506  704660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:29:09.332556  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.352512  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.359187  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:29:09.364275  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:29:09.364559  704660 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:09.364575  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:29:09.364638  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.375690  704660 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 20:29:09.375940  704660 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:29:09.386355  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:09.386396  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:29:09.386476  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.402265  704660 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 20:29:09.412773  704660 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 20:29:09.418587  704660 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:09.418666  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 20:29:09.418775  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.419320  704660 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:29:09.423729  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:29:09.423757  704660 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:29:09.423846  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.441567  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:29:09.442010  704660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:29:09.447860  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:29:09.451279  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:29:09.453459  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:09.453480  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:29:09.453561  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.455757  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:29:09.455822  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:29:09.455914  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.465113  704660 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:29:09.469477  704660 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:29:09.469509  704660 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:29:09.469576  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.479455  704660 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 20:29:09.482830  704660 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:29:09.487219  704660 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:29:09.487285  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:29:09.487386  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.498491  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.506413  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:29:09.509491  704660 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:29:09.509670  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:29:09.509687  704660 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:29:09.509759  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.515326  704660 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:09.515349  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:29:09.515413  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.556794  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.592629  704660 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:29:09.595721  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.601773  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.604845  704660 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:29:09.607957  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:09.607982  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:29:09.608078  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.639621  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.660885  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.690935  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.696294  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.717153  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.743500  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.746463  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.751738  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.757583  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.764350  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.764394  704660 retry.go:31] will retry after 315.573784ms: ssh: handshake failed: EOF
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.784428  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.784456  704660 retry.go:31] will retry after 304.179518ms: ssh: handshake failed: EOF
	I1002 20:29:09.898194  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
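The sed pipeline above splices two stanzas into the coredns ConfigMap before piping it back through kubectl replace. Reconstructed from the sed expressions themselves, the injected hosts block comes out as:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

(and a `log` directive is inserted above the existing `errors` line).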
	I1002 20:29:09.936055  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:29:10.111040  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:10.111126  704660 retry.go:31] will retry after 465.641139ms: ssh: handshake failed: EOF
	I1002 20:29:10.668679  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:29:10.668702  704660 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:29:10.797217  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:29:10.797297  704660 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:29:10.865274  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:10.881693  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:29:10.881716  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:29:10.886079  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:10.921408  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:10.943803  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:29:10.943828  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:29:10.978775  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:29:10.978805  704660 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:29:10.994840  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:11.011037  704660 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:29:11.011073  704660 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:29:11.030493  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:11.032022  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:29:11.032044  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:29:11.035800  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:29:11.035830  704660 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:29:11.071721  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:11.091723  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:11.106681  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:11.145109  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:29:11.145139  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:29:11.148280  704660 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.148309  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:29:11.202167  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.202196  704660 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:29:11.305203  704660 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.305232  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:29:11.316393  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:11.329281  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:29:11.329312  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:29:11.355129  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:11.398833  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.398857  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:29:11.409753  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:29:11.409781  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:29:11.426941  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.428747  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.489773  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:29:11.489841  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:29:11.494567  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.542853  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.615125  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:29:11.615198  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:29:11.677959  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:29:11.678040  704660 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:29:11.863554  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:29:11.863639  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:29:12.043926  704660 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.044010  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:29:12.200094  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:29:12.200165  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:29:12.470826  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.509295  704660 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.573157378s)
	I1002 20:29:12.509455  704660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.611238205s)
	I1002 20:29:12.509528  704660 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 20:29:12.511038  704660 node_ready.go:35] waiting up to 6m0s for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515289  704660 node_ready.go:49] node "addons-991638" is "Ready"
	I1002 20:29:12.515313  704660 node_ready.go:38] duration metric: took 3.935549ms for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515328  704660 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:29:12.515389  704660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:29:12.613485  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:29:12.613555  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:29:12.794628  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.92930886s)
	I1002 20:29:13.024378  704660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-991638" context rescaled to 1 replica
	I1002 20:29:13.094487  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:29:13.094553  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:29:13.666276  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:29:13.666353  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:29:14.220703  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:14.220782  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:29:14.633137  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:16.743396  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:29:16.743479  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:16.772705  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:17.648047  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:29:17.758402  704660 addons.go:238] Setting addon gcp-auth=true in "addons-991638"
	I1002 20:29:17.758451  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:17.758915  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:17.782244  704660 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:29:17.782296  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:17.815647  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:19.091966  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.205841491s)
	I1002 20:29:19.092058  704660 addons.go:479] Verifying addon ingress=true in "addons-991638"
	I1002 20:29:19.092330  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.170806627s)
	I1002 20:29:19.092745  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.097877392s)
	I1002 20:29:19.092800  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.06227576s)
	I1002 20:29:19.095718  704660 out.go:179] * Verifying ingress addon...
	I1002 20:29:19.099717  704660 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:29:19.283832  704660 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:29:19.283853  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:19.648674  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.108386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.606825  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.102257  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.030489478s)
	I1002 20:29:21.102331  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.01058393s)
	I1002 20:29:21.102523  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.995812674s)
	I1002 20:29:21.102576  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.786160691s)
	I1002 20:29:21.102665  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.747515739s)
	I1002 20:29:21.102736  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.675772832s)
	W1002 20:29:21.102757  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:21.102773  704660 retry.go:31] will retry after 165.427061ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
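	The apply half-succeeds: the gadget namespace, RBAC objects, and daemonset from ig-deployment.yaml land, but ig-crd.yaml fails validation because every Kubernetes manifest must declare apiVersion and kind, and the 14-byte file scp'd earlier cannot contain either. For reference, the smallest header the validator would accept looks like the two lines below (assuming, from the file's name, that it is meant to hold a CustomResourceDefinition):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition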
	I1002 20:29:21.102843  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.674073931s)
	I1002 20:29:21.102857  704660 addons.go:479] Verifying addon metrics-server=true in "addons-991638"
	I1002 20:29:21.102896  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.608257689s)
	I1002 20:29:21.102908  704660 addons.go:479] Verifying addon registry=true in "addons-991638"
	I1002 20:29:21.103092  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.560138876s)
	I1002 20:29:21.103416  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.632501338s)
	W1002 20:29:21.103659  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:29:21.103480  704660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.588080107s)
	I1002 20:29:21.103716  704660 api_server.go:72] duration metric: took 12.130202438s to wait for apiserver process to appear ...
	I1002 20:29:21.103723  704660 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:29:21.103737  704660 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:29:21.104569  704660 retry.go:31] will retry after 131.465799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I1002 20:29:21.106517  704660 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-991638 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:29:21.106623  704660 out.go:179] * Verifying registry addon...
	I1002 20:29:21.110687  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:29:21.128889  704660 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
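The healthz probe above hits the API server's liveness endpoint directly. The same check can be reproduced against the cluster with kubectl's raw passthrough (context name taken from the log):

    kubectl --context addons-991638 get --raw='/healthz'
    # prints: ok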
	I1002 20:29:21.146707  704660 api_server.go:141] control plane version: v1.34.1
	I1002 20:29:21.146750  704660 api_server.go:131] duration metric: took 43.020902ms to wait for apiserver health ...
	I1002 20:29:21.146760  704660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:29:21.231778  704660 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:29:21.231803  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.232570  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.232990  704660 system_pods.go:59] 16 kube-system pods found
	I1002 20:29:21.233027  704660 system_pods.go:61] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233037  704660 system_pods.go:61] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233049  704660 system_pods.go:61] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.233054  704660 system_pods.go:61] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.233058  704660 system_pods.go:61] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.233072  704660 system_pods.go:61] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.233077  704660 system_pods.go:61] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.233082  704660 system_pods.go:61] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.233093  704660 system_pods.go:61] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.233100  704660 system_pods.go:61] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.233110  704660 system_pods.go:61] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.233117  704660 system_pods.go:61] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.233126  704660 system_pods.go:61] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.233138  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233145  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233152  704660 system_pods.go:61] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.233159  704660 system_pods.go:74] duration metric: took 86.393348ms to wait for pod list to return data ...
	I1002 20:29:21.233171  704660 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:29:21.236551  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:21.269271  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:29:21.290207  704660 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
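The default-storageclass failure above is a routine optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the write carried a stale resourceVersion and the API server rejected it. Re-reading and reapplying resolves it; done by hand, a patch sidesteps the read-modify-write window, assuming the standard default-class annotation:

    kubectl --context addons-991638 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'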
	I1002 20:29:21.375005  704660 default_sa.go:45] found service account: "default"
	I1002 20:29:21.375031  704660 default_sa.go:55] duration metric: took 141.854284ms for default service account to be created ...
	I1002 20:29:21.375042  704660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:29:21.403678  704660 system_pods.go:86] 17 kube-system pods found
	I1002 20:29:21.403714  704660 system_pods.go:89] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403724  704660 system_pods.go:89] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403730  704660 system_pods.go:89] "csi-hostpath-attacher-0" [e1b49a9e-cc2c-43ad-a104-7517ae3b9b71] Pending
	I1002 20:29:21.403736  704660 system_pods.go:89] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.403740  704660 system_pods.go:89] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.403744  704660 system_pods.go:89] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.403751  704660 system_pods.go:89] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.403755  704660 system_pods.go:89] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.403760  704660 system_pods.go:89] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.403767  704660 system_pods.go:89] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.403774  704660 system_pods.go:89] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.403789  704660 system_pods.go:89] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.403795  704660 system_pods.go:89] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.403857  704660 system_pods.go:89] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.403871  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403878  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403881  704660 system_pods.go:89] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.403889  704660 system_pods.go:126] duration metric: took 28.840694ms to wait for k8s-apps to be running ...
	I1002 20:29:21.403905  704660 system_svc.go:44] waiting for kubelet service to be running ...
	I1002 20:29:21.403962  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
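systemctl is-active --quiet prints nothing and reports purely through its exit code (0 only when the unit is active), which is what lets the runner use it as a bare pass/fail probe:

    sudo systemctl is-active --quiet kubelet && echo kubelet is running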
	I1002 20:29:21.633145  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.633273  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.719440  704660 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.937165373s)
	I1002 20:29:21.723512  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:21.737044  704660 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:29:21.739233  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.105985614s)
	I1002 20:29:21.739269  704660 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:21.741380  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:29:21.741407  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:29:21.741519  704660 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:29:21.746220  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:29:21.749098  704660 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:29:21.749124  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
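Each of these kapi.go waits polls a label selector until every matching pod is Running and Ready. The equivalent manual check for the block above, using the selector and namespace from the log:

    kubectl --context addons-991638 -n kube-system get pods \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver
    # or block the way the log does:
    kubectl --context addons-991638 -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m0s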
	I1002 20:29:21.885645  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:29:21.885723  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:29:21.999241  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:21.999306  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:29:22.103650  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.107641  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:22.115675  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.249646  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:22.603835  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.614145  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.750221  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.104878  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.113990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.250841  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.614664  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.616397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.754661  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.028432  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.791823015s)
	I1002 20:29:24.104308  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.114739  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.250667  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.302476  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.194753737s)
	I1002 20:29:24.302845  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.033536467s)
	W1002 20:29:24.302913  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
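Unlike the snapshot race above, this failure is a malformed manifest, so the retries that follow keep hitting the same validation error: client-side validation rejects any YAML document that omits the two mandatory type fields. Every document in ig-crd.yaml needs a header of the form below (the names shown are placeholders; the actual contents of ig-crd.yaml are not in the log):

    apiVersion: apiextensions.k8s.io/v1   # API group/version the object belongs to
    kind: CustomResourceDefinition        # type within that group
    metadata:
      name: examples.gadget.example.io    # placeholder, not taken from the log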
	I1002 20:29:24.302985  704660 retry.go:31] will retry after 309.54405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:24.302944  704660 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.89841849s)
	I1002 20:29:24.303063  704660 system_svc.go:56] duration metric: took 2.899157354s WaitForService to wait for kubelet
	I1002 20:29:24.303086  704660 kubeadm.go:586] duration metric: took 15.329570576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:29:24.303134  704660 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:29:24.305338  704660 addons.go:479] Verifying addon gcp-auth=true in "addons-991638"
	I1002 20:29:24.308194  704660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:29:24.308224  704660 node_conditions.go:123] node cpu capacity is 2
	I1002 20:29:24.308238  704660 node_conditions.go:105] duration metric: took 5.087392ms to run NodePressure ...
	I1002 20:29:24.308251  704660 start.go:241] waiting for startup goroutines ...
	I1002 20:29:24.310445  704660 out.go:179] * Verifying gcp-auth addon...
	I1002 20:29:24.313602  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:29:24.325918  704660 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:29:24.325990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:24.603413  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.613652  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:24.613983  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.750444  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.817604  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.103685  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.118065  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.249976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.317010  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.603841  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.613949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.750092  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.817987  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.957381  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.343690162s)
	W1002 20:29:25.957546  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:25.957590  704660 retry.go:31] will retry after 334.218122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:26.104386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.114584  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.250032  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.292352  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:26.317525  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:26.604047  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.613938  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.750249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.817111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.103343  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.113575  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.250109  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.317078  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.444622  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.152189827s)
	W1002 20:29:27.444714  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:27.444752  704660 retry.go:31] will retry after 546.51266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:27.604261  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.614167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.749521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.817914  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.992173  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:28.104304  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.114156  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.249193  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.317122  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:28.603290  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.614437  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.749750  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.817014  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:29:28.983712  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:28.983784  704660 retry.go:31] will retry after 1.260023447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:29.103350  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.114454  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.249644  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.317067  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:29.602986  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.613726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.749688  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.816730  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.103822  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.114057  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.244571  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:30.250615  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.603619  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.614026  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.749853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.816479  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.103600  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.114190  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.249506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.298691  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.054084159s)
	W1002 20:29:31.298721  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:31.298741  704660 retry.go:31] will retry after 1.646308182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:31.316219  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.605040  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.631189  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.750015  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.817796  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.103881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.116470  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.250021  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.317307  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.604391  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.614775  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.750540  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.816630  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.946032  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:33.104871  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.115283  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.250183  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.317668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:33.603187  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.614529  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.749647  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.817102  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.018177  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.072106262s)
	W1002 20:29:34.018217  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:34.018266  704660 retry.go:31] will retry after 2.385257575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:34.104529  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.114836  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.250452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.318843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.603645  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.614617  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.750082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.817533  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.107703  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.114893  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.251718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.603848  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.613657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.750110  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.816940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.103942  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.113970  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.250099  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.316846  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.404147  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:36.604239  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.613891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.750685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.818255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.103487  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.114495  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.250302  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.316913  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.595720  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.191535427s)
	W1002 20:29:37.595768  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:37.595789  704660 retry.go:31] will retry after 3.1319796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:37.604699  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.613531  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.750080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.820120  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.135110  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.135518  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.251304  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.317891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.603678  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.614208  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.750230  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.817842  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.110039  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.123577  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.253100  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.320981  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.606978  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.619008  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.757188  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.821029  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.104171  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.114472  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.250599  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.316853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.603622  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.614494  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.728573  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:40.750499  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.817269  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.103718  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.113793  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.251438  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.323113  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.606477  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.615889  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.749940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.819471  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.104623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.115622  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.203580  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.474960878s)
	W1002 20:29:42.203682  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:42.203776  704660 retry.go:31] will retry after 7.48710054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
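Across the eight attempts above, the waits grow from 309.54405ms through 334.218122ms, 546.51266ms, 1.260023447s, 1.646308182s, 2.385257575s, and 3.1319796s to 7.48710054s: retry.go's jittered, roughly exponential backoff. The shape of that loop as a minimal shell sketch (minikube's real implementation is Go, and its jitter is omitted here):

    delay=0.3
    for attempt in 1 2 3 4 5 6 7 8; do
      kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep "$delay"
      delay=$(echo "$delay * 2" | bc)   # double the base wait each attempt
    done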
	I1002 20:29:42.250824  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.317605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.603374  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.614191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.750400  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.816718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.103173  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.114483  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.249820  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.317639  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.603139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.614668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.750509  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.817740  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.103982  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.113850  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.250679  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.604766  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.615339  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.749664  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.817244  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.105520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.115165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.323737  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.603415  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.614694  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.750384  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.817336  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.104015  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.113900  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.250650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.316397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.603826  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.613857  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.750135  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.817184  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.103139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.114040  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.250197  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.316961  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.603106  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.613879  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.753191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.816593  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.104633  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.114511  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.249966  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.317031  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.603266  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.614360  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.750158  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.817128  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.103974  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.113579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.250363  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.317726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.603262  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.614568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.691764  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:49.753093  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.818136  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.106234  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.117011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.250613  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.317535  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.605091  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.615017  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.751316  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.817578  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.107737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.116527  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.251344  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.319605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.408757  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.716938043s)
	W1002 20:29:51.408854  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.408899  704660 retry.go:31] will retry after 12.661372424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.603144  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.614399  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.750042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.817211  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.104464  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.115011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.250151  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.316858  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.603659  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.614216  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.751315  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.817053  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.104565  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.113559  704660 kapi.go:107] duration metric: took 32.002874096s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 20:29:53.250114  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.317821  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.603164  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.750146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.820167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.106776  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.316832  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.603001  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.750421  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.817545  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.103737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.250894  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.316949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.603085  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.750103  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.816937  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.103610  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.250374  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.351350  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.603669  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.750222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.816995  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.103711  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.250016  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.317173  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.603412  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.749585  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.817087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.106858  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.250249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.317416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.602677  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.751843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.816975  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.104520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.250328  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.316837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.603027  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.750542  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.817568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.118971  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.260853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.324376  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.603347  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.751070  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.817027  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.116318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.249998  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.318228  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.604526  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.750944  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.818452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.104307  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.254223  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.318397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.604952  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.750890  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.817295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.106126  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.254295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.317579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.603623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.755126  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.818458  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.070964  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:04.103003  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.251061  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.317116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.604016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.750159  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.819498  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.103756  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.249080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.603780  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.751506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.820087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.861050  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.790044781s)
	W1002 20:30:05.861139  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:05.861176  704660 retry.go:31] will retry after 17.393091817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:06.103387  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.250507  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.317837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:06.603460  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.750558  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.817614  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.103902  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.250598  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.316702  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.602834  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.754146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.822685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.103768  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.251042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.316848  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.603426  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.750576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.841843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.103764  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.250354  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.331806  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.605318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.750657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.817095  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.103398  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.255408  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.318022  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.603132  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.750403  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.818293  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.104225  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.250993  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.317127  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.603016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.749773  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.817866  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.103202  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.255976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.317255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.604954  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.750466  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.817799  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.121875  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.251358  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.317771  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.603035  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.749741  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.816693  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.103790  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.250141  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.317253  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.603881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.751654  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.834207  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.104408  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.249815  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.316650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.602801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.750009  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.817116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.120769  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.251147  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.352347  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.603722  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.749988  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.817248  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.104049  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.250170  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.317087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.603966  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.751038  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.817272  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.104249  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.254111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.354335  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.603774  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.750446  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.820222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.104228  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.250204  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.317641  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.603235  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.750469  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.817720  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.103219  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.249901  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.354982  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.603352  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.750342  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.816943  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.104120  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.250875  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.316432  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.604183  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.751198  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.851690  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.103478  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.249326  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:22.318236  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.605156  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.750311  704660 kapi.go:107] duration metric: took 1m1.004091859s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:30:22.818417  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.103467  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.254761  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:23.317834  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.603470  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.816589  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.105925  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.317505  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.604867  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.802347  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.547475184s)
	W1002 20:30:24.802389  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:24.802426  704660 retry.go:31] will retry after 27.998098838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:24.817602  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.106548  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.317082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.603074  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.817303  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.103771  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.316828  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.603416  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.816576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.102651  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.316355  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.603434  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.816609  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.103586  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.318112  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.604364  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.816965  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.103801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.317624  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.603114  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.817415  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.103838  704660 kapi.go:107] duration metric: took 1m11.004121778s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:30:30.316991  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.817460  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.316734  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.817416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.321137  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.818165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:33.318614  704660 kapi.go:107] duration metric: took 1m9.005007455s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:30:33.319986  704660 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-991638 cluster.
	I1002 20:30:33.321179  704660 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:30:33.322167  704660 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
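
The gcp-auth messages above describe the opt-out mechanism: the webhook skips pods that carry the `gcp-auth-skip-secret` label key. A minimal sketch of a pod using it; the pod name, image, and the "true" value are illustrative (per the message above, the key's presence is what matters):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"     # presence of this key opts the pod out
    spec:
      containers:
        - name: app
          image: nginx:1.27              # any image; illustrative
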
	I1002 20:30:52.801095  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:30:53.728667  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:30:53.728763  704660 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:30:53.731775  704660 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, registry-creds, volcano, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1002 20:30:53.733577  704660 addons.go:514] duration metric: took 1m44.75893549s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns registry-creds volcano storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1002 20:30:53.733631  704660 start.go:246] waiting for cluster config update ...
	I1002 20:30:53.733654  704660 start.go:255] writing updated cluster config ...
	I1002 20:30:53.733956  704660 ssh_runner.go:195] Run: rm -f paused
	I1002 20:30:53.738361  704660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:53.742889  704660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.750373  704660 pod_ready.go:94] pod "coredns-66bc5c9577-wkwnx" is "Ready"
	I1002 20:30:53.750443  704660 pod_ready.go:86] duration metric: took 7.51962ms for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.752616  704660 pod_ready.go:83] waiting for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.757985  704660 pod_ready.go:94] pod "etcd-addons-991638" is "Ready"
	I1002 20:30:53.758011  704660 pod_ready.go:86] duration metric: took 5.320347ms for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.760125  704660 pod_ready.go:83] waiting for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.764465  704660 pod_ready.go:94] pod "kube-apiserver-addons-991638" is "Ready"
	I1002 20:30:53.764491  704660 pod_ready.go:86] duration metric: took 4.30499ms for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.766969  704660 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.142419  704660 pod_ready.go:94] pod "kube-controller-manager-addons-991638" is "Ready"
	I1002 20:30:54.142449  704660 pod_ready.go:86] duration metric: took 375.451024ms for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.342704  704660 pod_ready.go:83] waiting for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.742276  704660 pod_ready.go:94] pod "kube-proxy-xfnp6" is "Ready"
	I1002 20:30:54.742307  704660 pod_ready.go:86] duration metric: took 399.528424ms for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.943143  704660 pod_ready.go:83] waiting for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344485  704660 pod_ready.go:94] pod "kube-scheduler-addons-991638" is "Ready"
	I1002 20:30:55.344522  704660 pod_ready.go:86] duration metric: took 401.35166ms for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344539  704660 pod_ready.go:40] duration metric: took 1.606141213s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:55.401584  704660 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:30:55.403167  704660 out.go:179] * Done! kubectl is now configured to use "addons-991638" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 02 20:30:16 addons-991638 dockerd[1126]: time="2025-10-02T20:30:16.318971281Z" level=warning msg="reference for unknown type: " digest="sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5" remote="registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Oct 02 20:30:16 addons-991638 dockerd[1126]: time="2025-10-02T20:30:16.401542650Z" level=info msg="ignoring event" container=36a76c555a0b486e09600eaf3aa913079dd061e6679c39805fafd0adb2c49ef0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:30:16 addons-991638 dockerd[1126]: time="2025-10-02T20:30:16.429282767Z" level=info msg="ignoring event" container=bafc47ffd179a1d302af85b251380b5a9e18404eb40f787ad361fa7f10af66f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:30:17 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:17Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5: Status: Downloaded newer image for registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Oct 02 20:30:17 addons-991638 dockerd[1126]: time="2025-10-02T20:30:17.724789897Z" level=warning msg="reference for unknown type: " digest="sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0" remote="registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Oct 02 20:30:18 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:18Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Oct 02 20:30:18 addons-991638 dockerd[1126]: time="2025-10-02T20:30:18.950240927Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Oct 02 20:30:20 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:20Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Oct 02 20:30:20 addons-991638 dockerd[1126]: time="2025-10-02T20:30:20.496900099Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Oct 02 20:30:21 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:21Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Oct 02 20:30:22 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41ae5a531cce1fa314ae8d87eab53574ab9fbb7a6c579220cb294416b6639dd7/resolv.conf as [nameserver 10.96.0.10 search volcano-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:30:22 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:22Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
	Oct 02 20:30:22 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/38baae6c52ebcef148e703cd835d286c80c18d33e7997ed28ae2e164d7a10616/resolv.conf as [nameserver 10.96.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:30:23 addons-991638 dockerd[1126]: time="2025-10-02T20:30:23.198978392Z" level=warning msg="reference for unknown type: " digest="sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef" remote="registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef"
	Oct 02 20:30:28 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/348e7e1895dcb9189712ef7624ca258b0960a65ea8b244a2821d9f5e88b0435d/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:30:28 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:28Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.13.2@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef"
	Oct 02 20:30:28 addons-991638 dockerd[1126]: time="2025-10-02T20:30:28.851731891Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Oct 02 20:30:32 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:30:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Oct 02 20:31:13 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:31:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c72fb8946840c32abb07852f1d79722f9afdf8c6ba0060f65c167b361c76e07d/resolv.conf as [nameserver 10.96.0.10 search my-volcano.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:31:14 addons-991638 dockerd[1126]: time="2025-10-02T20:31:14.087072954Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:31:30 addons-991638 dockerd[1126]: time="2025-10-02T20:31:30.770827033Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:31:57 addons-991638 dockerd[1126]: time="2025-10-02T20:31:57.783240463Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:32:38 addons-991638 dockerd[1126]: time="2025-10-02T20:32:38.852273341Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:32:38 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:32:38Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
	Oct 02 20:34:00 addons-991638 dockerd[1126]: time="2025-10-02T20:34:00.766338224Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
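The repeated `toomanyrequests` errors above are the proximate failure: the runner's anonymous Docker Hub pull quota is exhausted, so `nginx:latest` for `test-job-nginx-0` never lands. Docker Hub reports the remaining quota in response headers; a minimal check, assuming `curl` and `jq` on the host (`ratelimitpreview/test` is Docker's documented probe repository, not something this job pulls):

    # fetch an anonymous pull token, then read the RateLimit-* headers
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit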
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	629d3da1f7541       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 3 minutes ago       Running             gcp-auth                                 0                   348e7e1895dcb       gcp-auth-78565c9fb4-mprcr                  gcp-auth
	810d41d3d1f91       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             3 minutes ago       Running             controller                               0                   38baae6c52ebc       ingress-nginx-controller-9cc49f96f-g6rz7   ingress-nginx
	34a6affb65862       volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001                                         3 minutes ago       Running             admission                                0                   41ae5a531cce1       volcano-admission-6c447bd768-v68lw         volcano-system
	7fe1ae5b58acc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	087c9272590bb       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          3 minutes ago       Running             csi-provisioner                          0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	f673a92f38d37       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            3 minutes ago       Running             liveness-probe                           0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	26e913322af4f       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           3 minutes ago       Running             hostpath                                 0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	f33b41dff54c1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                3 minutes ago       Running             node-driver-registrar                    0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	8c93b919c5b4b       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              3 minutes ago       Running             csi-resizer                              0                   a9a8d56da7da5       csi-hostpath-resizer-0                     kube-system
	714339ab4a604       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   4 minutes ago       Running             csi-external-health-monitor-controller   0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	3afb513dbbbaa       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             4 minutes ago       Running             csi-attacher                             0                   5c0161b7af378       csi-hostpath-attacher-0                    kube-system
	4cffaf28bc290       volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34                                               4 minutes ago       Running             volcano-scheduler                        0                   d0b176831a0c8       volcano-scheduler-76c996c8bf-45dxm         volcano-system
	b9fcc2878d740       volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242                                      4 minutes ago       Running             volcano-controllers                      0                   2c841dae4d247       volcano-controllers-6fd4f85cb8-rsgcj       volcano-system
	3ef8d0f1a48cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              patch                                    0                   bf2651aa1dde2       ingress-nginx-admission-patch-z8w27        ingress-nginx
	0612a088672a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              create                                   0                   3e77d9aaaed22       ingress-nginx-admission-create-h2p7z       ingress-nginx
	edb7914b91d73       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago       Running             volume-snapshot-controller               0                   063272a1fd848       snapshot-controller-7d9fbc56b8-n92kj       kube-system
	df4c807a71bc6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            4 minutes ago       Running             gadget                                   0                   2dffa89109ee8       gadget-gq5qh                               gadget
	eebe9684b11cf       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago       Running             volume-snapshot-controller               0                   30e397fdcba62       snapshot-controller-7d9fbc56b8-htvkn       kube-system
	682324bbadca7       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        4 minutes ago       Running             yakd                                     0                   a484bc3e97545       yakd-dashboard-5ff678cb9-lmbwf             yakd-dashboard
	7f30da4857b71       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       4 minutes ago       Running             local-path-provisioner                   0                   dbb862b79dae1       local-path-provisioner-648f6765c9-v6wrv    local-path-storage
	534aa87859e07       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              4 minutes ago       Running             registry-proxy                           0                   e61cfa914e8b8       registry-proxy-97fzv                       kube-system
	dc6958ff54fd4       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         4 minutes ago       Running             minikube-ingress-dns                     0                   c8ba98b08e917       kube-ingress-dns-minikube                  kube-system
	2380c15f69fdf       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        4 minutes ago       Running             metrics-server                           0                   5e5723de853e6       metrics-server-85b7d694d7-4vr85            kube-system
	0d246a41a7a9c       registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                                             4 minutes ago       Running             registry                                 0                   4d086ab219e01       registry-66898fdd98-6774f                  kube-system
	a6633043fb040       nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd                                     4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   2c0138d29c98a       nvidia-device-plugin-daemonset-xtwll       kube-system
	000b57217fab1       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               4 minutes ago       Running             cloud-spanner-emulator                   0                   65767d59dcfc1       cloud-spanner-emulator-85f6b7fc65-jcxrq    default
	7b7e993c0e79f       ba04bb24b9575                                                                                                                                4 minutes ago       Running             storage-provisioner                      0                   48962134af601       storage-provisioner                        kube-system
	6691f55a72958       138784d87c9c5                                                                                                                                5 minutes ago       Running             coredns                                  0                   8d8b118e8d1e4       coredns-66bc5c9577-wkwnx                   kube-system
	484f1ee7ca6c4       05baa95f5142d                                                                                                                                5 minutes ago       Running             kube-proxy                               0                   9057048c41ea1       kube-proxy-xfnp6                           kube-system
	5dc910c8154e4       a1894772a478e                                                                                                                                5 minutes ago       Running             etcd                                     0                   c6f607736ce1a       etcd-addons-991638                         kube-system
	14517010441e5       b5f57ec6b9867                                                                                                                                5 minutes ago       Running             kube-scheduler                           0                   45e90d4f82e13       kube-scheduler-addons-991638               kube-system
	aac6857cf97a0       7eb2c6ff0c5a7                                                                                                                                5 minutes ago       Running             kube-controller-manager                  0                   b61da85a9eb0e       kube-controller-manager-addons-991638      kube-system
	a59993882d357       43911e833d64d                                                                                                                                5 minutes ago       Running             kube-apiserver                           0                   36c3274520a66       kube-apiserver-addons-991638               kube-system
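The bottom rows list bare image-ID prefixes in the IMAGE column rather than tags. They can be resolved on the node itself; a sketch, assuming the docker runtime is reachable via `minikube ssh` as in this run:

    # map a truncated IMAGE id back to its repository tag(s)
    minikube -p addons-991638 ssh -- docker images --no-trunc | grep ba04bb24b9575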
	
	
	==> controller_ingress [810d41d3d1f9] <==
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	W1002 20:30:28.973084       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1002 20:30:28.973239       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I1002 20:30:28.984625       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.1" state="clean" commit="93248f9ae092f571eb870b7664c534bfc7d00f03" platform="linux/arm64"
	I1002 20:30:29.683889       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1002 20:30:29.697215       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1002 20:30:29.706323       7 nginx.go:273] "Starting NGINX Ingress controller"
	I1002 20:30:29.718196       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b8d60449-ae96-4c13-92a1-c389e5fce3f6", APIVersion:"v1", ResourceVersion:"754", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1002 20:30:29.719963       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"5926004e-4933-4581-9e6d-0da6edb9d128", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1002 20:30:29.720147       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"76b21a16-cdca-482a-bd26-5e6fea1a4b71", APIVersion:"v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1002 20:30:30.909112       7 nginx.go:319] "Starting NGINX process"
	I1002 20:30:30.909390       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1002 20:30:30.910173       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1002 20:30:30.910704       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:30:30.918480       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1002 20:30:30.918697       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-g6rz7"
	I1002 20:30:30.924600       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.934073       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.957588       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:30:30.957659       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1002 20:30:30.957685       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
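Startup here is clean: the lease is acquired, the initial backend reload succeeds, and the two "POD is not ready" lines only reflect the controller's own readiness gate during that first sync. A quick check of the controller pod afterwards (the label below is ingress-nginx's conventional one, assumed here):

    kubectl --context addons-991638 -n ingress-nginx get pods \
      --selector app.kubernetes.io/component=controller -o wide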
	
	
	==> coredns [6691f55a7295] <==
	[INFO] 10.244.0.7:47201 - 63697 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000091743s
	[INFO] 10.244.0.7:47201 - 41881 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002619619s
	[INFO] 10.244.0.7:47201 - 40794 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002771285s
	[INFO] 10.244.0.7:47201 - 57423 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000191904s
	[INFO] 10.244.0.7:47201 - 29961 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000108481s
	[INFO] 10.244.0.7:35713 - 8952 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191206s
	[INFO] 10.244.0.7:35713 - 8475 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100112s
	[INFO] 10.244.0.7:33033 - 27442 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128445s
	[INFO] 10.244.0.7:33033 - 27253 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087024s
	[INFO] 10.244.0.7:45040 - 19609 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108638s
	[INFO] 10.244.0.7:45040 - 19412 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134558s
	[INFO] 10.244.0.7:37712 - 40936 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001243118s
	[INFO] 10.244.0.7:37712 - 41124 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001461721s
	[INFO] 10.244.0.7:56368 - 25712 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121651s
	[INFO] 10.244.0.7:56368 - 25933 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087615s
	[INFO] 10.244.0.26:33665 - 7524 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225356s
	[INFO] 10.244.0.26:36616 - 9923 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170948s
	[INFO] 10.244.0.26:57364 - 60911 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153093s
	[INFO] 10.244.0.26:49778 - 1221 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113478s
	[INFO] 10.244.0.26:50758 - 6790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157762s
	[INFO] 10.244.0.26:47970 - 38720 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085318s
	[INFO] 10.244.0.26:47839 - 36929 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002380387s
	[INFO] 10.244.0.26:52240 - 40464 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002084794s
	[INFO] 10.244.0.26:58902 - 63295 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001598231s
	[INFO] 10.244.0.26:38424 - 57615 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001549484s
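The NXDOMAIN bursts above are the `ndots:5` search-path expansion at work (see the resolv.conf rewrites in the Docker log): each external name is tried against every cluster search domain before being resolved as-is. Where that fan-out matters it can be trimmed in the pod template; a minimal sketch against a hypothetical deployment name:

    # lower ndots so fully-qualified external names skip the search list
    kubectl --context addons-991638 -n default patch deployment example-app --type merge -p \
      '{"spec":{"template":{"spec":{"dnsConfig":{"options":[{"name":"ndots","value":"2"}]}}}}}'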
	
	
	==> describe nodes <==
	Name:               addons-991638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-991638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-991638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-991638
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-991638"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-991638
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:30:35 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:30:35 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:30:35 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:30:35 +0000   Thu, 02 Oct 2025 20:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-991638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 72f32394f70644d59920eb3322dfa720
	  System UUID:                86ebb095-120f-4f4a-aceb-13d70f79315b
	  Boot ID:                    da6cbe7f-2b2e-4cba-8b8d-394577434cdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-85f6b7fc65-jcxrq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  gadget                      gadget-gq5qh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  gcp-auth                    gcp-auth-78565c9fb4-mprcr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-g6rz7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m56s
	  kube-system                 coredns-66bc5c9577-wkwnx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m6s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 csi-hostpathplugin-22xqp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 etcd-addons-991638                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kube-apiserver-addons-991638                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-addons-991638       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-xfnp6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-addons-991638                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 metrics-server-85b7d694d7-4vr85             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m59s
	  kube-system                 nvidia-device-plugin-daemonset-xtwll        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 registry-66898fdd98-6774f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 registry-creds-764b6fb674-nsjx4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 registry-proxy-97fzv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 snapshot-controller-7d9fbc56b8-htvkn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-n92kj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  local-path-storage          local-path-provisioner-648f6765c9-v6wrv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  my-volcano                  test-job-nginx-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  volcano-system              volcano-admission-6c447bd768-v68lw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  volcano-system              volcano-controllers-6fd4f85cb8-rsgcj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  volcano-system              volcano-scheduler-76c996c8bf-45dxm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lmbwf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             588Mi (7%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 5m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m11s                  kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m11s                  kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m11s                  kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m7s                   node-controller  Node addons-991638 event: Registered Node addons-991638 in Controller
	  Normal   NodeReady                5m7s                   kubelet          Node addons-991638 status is now: NodeReady
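`test-job-nginx-0` appears in the pod table above at 3m2s old with no requests or limits and never reaches Ready; its own event stream pins down the image-pull cause faster than the node view does. A sketch:

    # show only the failing pod's events, oldest first
    kubectl --context addons-991638 -n my-volcano get events \
      --field-selector involvedObject.name=test-job-nginx-0 --sort-by=.lastTimestamp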
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:27] kauditd_printk_skb: 8 callbacks suppressed
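The `kmem.limit_in_bytes` deprecation line matches the kubelet's CgroupV1 maintenance-mode warning in the node events: this host is still on cgroup v1. A one-liner to confirm which hierarchy a node runs (prints `cgroup2fs` on v2, `tmpfs` on v1):

    minikube -p addons-991638 ssh -- stat -fc %T /sys/fs/cgroup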
	
	
	==> etcd [5dc910c8154e] <==
	{"level":"warn","ts":"2025-10-02T20:28:59.744801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.761235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.773004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.796857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.825855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.835763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.861875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.881048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.889633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.959804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.946219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.972286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.836192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.866041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.877941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.897162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.933812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.977588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.014404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.063387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.106303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.178294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.193258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.208837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.237195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36526","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [629d3da1f754] <==
	2025/10/02 20:30:32 GCP Auth Webhook started!
	2025/10/02 20:31:11 Ready to marshal response ...
	2025/10/02 20:31:11 Ready to write response ...
	2025/10/02 20:31:12 Ready to marshal response ...
	2025/10/02 20:31:12 Ready to write response ...
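The marshal/write pairs at 20:31:11-12 line up with the admission of the test job's pod, i.e. the webhook did run against it. Whether the credentials actually landed can be verified on the pod; a sketch (the injected env var name is the webhook's usual one, assumed here):

    kubectl --context addons-991638 -n my-volcano get pod test-job-nginx-0 \
      -o jsonpath='{.spec.containers[0].env[*].name}{"\n"}' | grep GOOGLE_APPLICATION_CREDENTIALS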
	
	
	==> kernel <==
	 20:34:14 up  3:16,  0 user,  load average: 0.28, 1.87, 2.71
	Linux addons-991638 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [a59993882d35] <==
	W1002 20:29:38.229145       1 logging.go:55] [core] [Channel #318 SubChannel #319]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1002 20:29:50.793208       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.113.33:443: connect: connection refused" logger="UnhandledError"
	W1002 20:29:50.793366       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:29:50.793420       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:29:50.795224       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.113.33:443: connect: connection refused" logger="UnhandledError"
	E1002 20:29:50.799395       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.113.33:443: connect: connection refused" logger="UnhandledError"
	E1002 20:29:50.820337       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.113.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.113.33:443: connect: connection refused" logger="UnhandledError"
	I1002 20:29:50.973041       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 20:30:11.171620       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:12.251426       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:13.307589       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:14.379785       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:15.416811       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:16.446804       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:17.527571       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:18.573782       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:19.613253       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:20.628776       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:21.718017       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	W1002 20:30:22.801029       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.129.62:443: connect: connection refused
	I1002 20:31:11.991980       1 controller.go:667] quota admission added evaluator for: jobs.batch.volcano.sh
	I1002 20:31:12.032123       1 controller.go:667] quota admission added evaluator for: podgroups.scheduling.volcano.sh
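The "failing closed" webhook errors stop once volcano-admission's endpoints come up, and the last two lines show queue/job admission succeeding at 20:31. Failing closed is a property of the webhook's `failurePolicy`; a quick way to audit it across all mutating webhooks:

    kubectl --context addons-991638 get mutatingwebhookconfigurations -o \
      jsonpath='{range .items[*].webhooks[*]}{.name}{"\t"}{.failurePolicy}{"\n"}{end}'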
	
	
	==> kube-controller-manager [aac6857cf97a] <==
	I1002 20:29:07.839197       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 20:29:07.840672       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:29:07.840843       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:29:07.840858       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:29:07.842990       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:29:07.844062       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 20:29:07.845776       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 20:29:07.846232       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:29:07.849397       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:29:07.849952       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:29:12.791568       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 20:29:15.264959       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 20:29:37.808774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:29:37.808923       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I1002 20:29:37.808949       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I1002 20:29:37.808971       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I1002 20:29:37.809007       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I1002 20:29:37.809026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I1002 20:29:37.809044       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 20:29:37.809070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I1002 20:29:37.809140       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 20:29:37.837564       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 20:29:37.842667       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 20:29:39.109356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:29:39.343960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
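The stale `metrics.k8s.io/v1beta1` discovery errors above are the controller-manager catching the window before metrics-server is reachable; they clear once the APIService turns Available. A sketch for checking that condition directly:

    kubectl --context addons-991638 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'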
	
	
	==> kube-proxy [484f1ee7ca6c] <==
	I1002 20:29:10.144358       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:29:10.287533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:29:10.388187       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:29:10.388220       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:29:10.388302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:29:10.427067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:29:10.427117       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:29:10.431953       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:29:10.432214       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:29:10.432229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:29:10.433939       1 config.go:200] "Starting service config controller"
	I1002 20:29:10.433950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:29:10.433980       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:29:10.433985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:29:10.433996       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:29:10.434000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:29:10.435854       1 config.go:309] "Starting node config controller"
	I1002 20:29:10.435864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:29:10.435871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:29:10.535044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:29:10.535084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:29:10.535128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
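The "configuration may be incomplete" warning above is advisory: with `nodePortAddresses` unset, NodePorts bind on every local IP. The setting lives in the kube-proxy ConfigMap that kubeadm (and minikube) generate; a quick check of what this cluster currently has (the `config.conf` data key is kubeadm's usual one, assumed here):

    kubectl --context addons-991638 -n kube-system get configmap kube-proxy \
      -o jsonpath='{.data.config\.conf}' | grep -i nodePortAddresses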
	
	
	==> kube-scheduler [14517010441e] <==
	E1002 20:29:00.811484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:00.815087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:00.815264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:00.815378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:29:00.815413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:00.815443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:00.815517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:00.815547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:00.815654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:29:00.815692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:29:00.815742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:29:01.619085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:29:01.626118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:29:01.726859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:01.845808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:01.894559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:01.899233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:29:01.914113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:29:01.933506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:01.941316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:02.102088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:02.108982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:02.129471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:29:02.240337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 20:29:04.797841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:32:10 addons-991638 kubelet[2264]: I1002 20:32:10.544726    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-6774f" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:32:15 addons-991638 kubelet[2264]: I1002 20:32:15.544892    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xtwll" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:32:23 addons-991638 kubelet[2264]: E1002 20:32:23.546702    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:32:35 addons-991638 kubelet[2264]: I1002 20:32:35.559053    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-97fzv" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:32:38 addons-991638 kubelet[2264]: E1002 20:32:38.855510    2264 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 02 20:32:38 addons-991638 kubelet[2264]: E1002 20:32:38.855556    2264 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 02 20:32:38 addons-991638 kubelet[2264]: E1002 20:32:38.855625    2264 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(4205c620-276a-4caf-ae6c-f51d48e8bda3): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:32:38 addons-991638 kubelet[2264]: E1002 20:32:38.855655    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:32:49 addons-991638 kubelet[2264]: E1002 20:32:49.545236    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:33:00 addons-991638 kubelet[2264]: E1002 20:33:00.546265    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:33:11 addons-991638 kubelet[2264]: E1002 20:33:11.545918    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:33:22 addons-991638 kubelet[2264]: E1002 20:33:22.545620    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:33:23 addons-991638 kubelet[2264]: E1002 20:33:23.182906    2264 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 20:33:23 addons-991638 kubelet[2264]: E1002 20:33:23.183004    2264 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/915a1770-063b-4100-8bfa-c7e4d2680639-gcr-creds podName:915a1770-063b-4100-8bfa-c7e4d2680639 nodeName:}" failed. No retries permitted until 2025-10-02 20:35:25.182984831 +0000 UTC m=+381.741206297 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/915a1770-063b-4100-8bfa-c7e4d2680639-gcr-creds") pod "registry-creds-764b6fb674-nsjx4" (UID: "915a1770-063b-4100-8bfa-c7e4d2680639") : secret "registry-creds-gcr" not found
	Oct 02 20:33:31 addons-991638 kubelet[2264]: E1002 20:33:31.545206    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-nsjx4" podUID="915a1770-063b-4100-8bfa-c7e4d2680639"
	Oct 02 20:33:32 addons-991638 kubelet[2264]: I1002 20:33:32.545104    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-6774f" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:33:34 addons-991638 kubelet[2264]: E1002 20:33:34.545335    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:33:38 addons-991638 kubelet[2264]: I1002 20:33:38.545513    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xtwll" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:33:47 addons-991638 kubelet[2264]: E1002 20:33:47.547553    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:33:52 addons-991638 kubelet[2264]: I1002 20:33:52.545218    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-97fzv" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:34:00 addons-991638 kubelet[2264]: E1002 20:34:00.769937    2264 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 02 20:34:00 addons-991638 kubelet[2264]: E1002 20:34:00.770013    2264 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 02 20:34:00 addons-991638 kubelet[2264]: E1002 20:34:00.770098    2264 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(4205c620-276a-4caf-ae6c-f51d48e8bda3): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:34:00 addons-991638 kubelet[2264]: E1002 20:34:00.770147    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	Oct 02 20:34:12 addons-991638 kubelet[2264]: E1002 20:34:12.545219    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="4205c620-276a-4caf-ae6c-f51d48e8bda3"
	
	
	==> storage-provisioner [7b7e993c0e79] <==
	W1002 20:33:49.302279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:51.305342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:51.310888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:53.316874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:53.324392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:55.328105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:55.335869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:57.338931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:57.343812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:59.353255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:33:59.359144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:01.362170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:01.367243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:03.370331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:03.375624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:05.378853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:05.386059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:07.389258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:07.394043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:09.397790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:09.402510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:11.405820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:11.412292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:13.420826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:34:13.429316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
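The kube-scheduler reflector errors in the dump above are a startup-ordering artifact rather than a test problem: the scheduler's informers begin listing cluster-scoped resources before its RBAC grants are visible, and the errors stop once "Caches are synced" at 20:29:04. The storage-provisioner warnings are likewise benign noise; its leader election still talks to the v1 Endpoints API, which the server reports as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. Both readings can be spot-checked against the live profile; a minimal sketch using the context name from this report:

	# Verify the scheduler's RBAC grants are in place after startup (expected output: yes)
	kubectl --context addons-991638 auth can-i list nodes --as=system:kube-scheduler
	# List the EndpointSlice objects that replace the deprecated v1 Endpoints
	kubectl --context addons-991638 get endpointslices -n kube-system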
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
helpers_test.go:269: (dbg) Run:  kubectl --context addons-991638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 test-job-nginx-0
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-991638 describe pod ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 test-job-nginx-0
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-991638 describe pod ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 test-job-nginx-0: exit status 1 (87.815197ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h2p7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z8w27" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nsjx4" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-991638 describe pod ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 test-job-nginx-0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable volcano --alsologtostderr -v=1: (11.830015076s)
--- FAIL: TestAddons/serial/Volcano (211.73s)
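The failure itself is environmental rather than a Volcano regression: every pull of nginx:latest was rejected with Docker Hub's unauthenticated "toomanyrequests" rate limit, so test-job-nginx-0 never left ImagePullBackOff. One possible mitigation for a CI host in this state (a sketch, assuming mirror.gcr.io is reachable and carries the needed images) is to hand the node's Docker daemon a registry mirror when the profile is created; authenticated pulls via docker login would raise the limit as well.

	# Hypothetical re-run of the profile with a Docker Hub mirror configured
	minikube start -p addons-991638 --driver=docker --container-runtime=docker \
	  --registry-mirror=https://mirror.gcr.io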

TestAddons/parallel/Ingress (492.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-991638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-991638 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-991638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c8ef7872-d301-45cb-9b5c-e7fc2319c39a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-02 20:49:01.329985667 +0000 UTC m=+1278.498163343
addons_test.go:252: (dbg) Run:  kubectl --context addons-991638 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-991638 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-991638/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:41:00 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.35
IPs:
IP:  10.244.0.35
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zlw9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7zlw9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-991638
Warning  Failed     8m                      kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    5m17s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m17s (x5 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     5m17s (x4 over 7m46s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2m55s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m55s (x21 over 7m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-991638 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-991638 logs nginx -n default: exit status 1 (109.361918ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-991638 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
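This is the same Docker Hub rate limit that sank the Volcano test, here against docker.io/nginx:alpine. If the host itself still has pull quota (an assumption; the daemon errors above suggest it may not), the image can be side-loaded into the node so the kubelet never pulls at all:

	# Pull on the host, then copy the image into the minikube node's runtime
	docker pull nginx:alpine
	minikube -p addons-991638 image load nginx:alpine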
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-991638
helpers_test.go:243: (dbg) docker inspect addons-991638:

-- stdout --
	[
	    {
	        "Id": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	        "Created": "2025-10-02T20:28:36.164446632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 705058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:28:36.229753591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hosts",
	        "LogPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84-json.log",
	        "Name": "/addons-991638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-991638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-991638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	                "LowerDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953-init/diff:/var/lib/docker/overlay2/3c380b0850506122817bc2917299dd60fe15a32ab35b7debe4519f1f9045f4d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-991638",
	                "Source": "/var/lib/docker/volumes/addons-991638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-991638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-991638",
	                "name.minikube.sigs.k8s.io": "addons-991638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "768c8a7310c370a43da0c26c5d036d5e7219705fa051b89897a391452ea6d9a6",
	            "SandboxKey": "/var/run/docker/netns/768c8a7310c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-991638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a0:60:40:27:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05f483610a0fe679b5a4ae4efa1318f553b88c9d264d6b136b55ee1eb76c3654",
	                    "EndpointID": "cbb01d4023b7a4128894d4e3144f6ccc9b60257273c0bfbde032cb7624cd4fb7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-991638",
	                        "ac51530cb591"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
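The full docker inspect dump is kept for the record, but targeted fields can be pulled with Go templates, which keeps post-mortems shorter. For example, the container state and the published port map captured above:

	docker inspect -f '{{.State.Status}}' addons-991638
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-991638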
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-991638 -n addons-991638
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 logs -n 25: (1.112034149s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p download-docker-039409 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p download-docker-039409                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p binary-mirror-067581 --alsologtostderr --binary-mirror http://127.0.0.1:39571 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p binary-mirror-067581                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ addons  │ disable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ start   │ -p addons-991638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:30 UTC │
	│ addons  │ addons-991638 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ addons  │ addons-991638 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ addons  │ addons-991638 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ ip      │ addons-991638 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ enable headlamp -p addons-991638 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ addons  │ addons-991638 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ addons  │ addons-991638 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:41 UTC │
	│ addons  │ addons-991638 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:41 UTC │ 02 Oct 25 20:41 UTC │
	│ addons  │ addons-991638 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:41 UTC │ 02 Oct 25 20:41 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                             │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:41 UTC │ 02 Oct 25 20:41 UTC │
	│ addons  │ addons-991638 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:41 UTC │ 02 Oct 25 20:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
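The Audit table records every minikube invocation made against this profile during the run. Assuming the minikube build under test supports the --audit flag of the logs command (an assumption for this build; recent releases do), the same entries can be retrieved without the rest of the log bundle:

	out/minikube-linux-arm64 -p addons-991638 logs --audit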
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:28:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:28:10.231562  704660 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:10.231700  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.231711  704660 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:10.231716  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.232008  704660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:28:10.232510  704660 out.go:368] Setting JSON to false
	I1002 20:28:10.233399  704660 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11417,"bootTime":1759425473,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:28:10.233494  704660 start.go:140] virtualization:  
	I1002 20:28:10.236719  704660 out.go:179] * [addons-991638] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:28:10.240328  704660 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:28:10.240425  704660 notify.go:220] Checking for updates...
	I1002 20:28:10.246179  704660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:28:10.249006  704660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:28:10.251947  704660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:28:10.255157  704660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:28:10.257883  704660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:28:10.260862  704660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:28:10.288692  704660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:28:10.288859  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.345302  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.335898449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.345417  704660 docker.go:318] overlay module found
	I1002 20:28:10.348598  704660 out.go:179] * Using the docker driver based on user configuration
	I1002 20:28:10.351429  704660 start.go:304] selected driver: docker
	I1002 20:28:10.351448  704660 start.go:924] validating driver "docker" against <nil>
	I1002 20:28:10.351462  704660 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:28:10.352198  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.405054  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.396474632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
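
[Editor's note] The two info.go:266 dumps above come from cli_runner shelling out to `docker system info --format "{{json .}}"` and decoding the JSON. A minimal sketch of that probe follows; the struct here is a hand-picked subset of real `docker info` JSON fields (NCPU, MemTotal, Driver, CgroupDriver, OperatingSystem), not minikube's actual type.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo holds a small subset of `docker system info` JSON output.
    type dockerInfo struct {
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
        Driver          string `json:"Driver"`
        CgroupDriver    string `json:"CgroupDriver"`
        OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
        // Same command the log records at 20:28:10.288859 and 20:28:10.352198.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%dB os=%q\n",
            info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal, info.OperatingSystem)
    }
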
	I1002 20:28:10.405212  704660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:28:10.405467  704660 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:28:10.408345  704660 out.go:179] * Using Docker driver with root privileges
	I1002 20:28:10.411100  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:10.411184  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:10.411197  704660 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:28:10.411276  704660 start.go:348] cluster config:
	{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:10.414279  704660 out.go:179] * Starting "addons-991638" primary control-plane node in "addons-991638" cluster
	I1002 20:28:10.417120  704660 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:28:10.419910  704660 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:28:10.422725  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:10.422776  704660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 20:28:10.422791  704660 cache.go:58] Caching tarball of preloaded images
	I1002 20:28:10.422838  704660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:28:10.422873  704660 preload.go:233] Found /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 20:28:10.422902  704660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 20:28:10.423255  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:10.423397  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json: {Name:mk2f26d255d9ea8bd15922b678de4d5774eef391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:10.438348  704660 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:28:10.438495  704660 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:28:10.438518  704660 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:28:10.438524  704660 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:28:10.438532  704660 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:28:10.438537  704660 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:28:28.104678  704660 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:28:28.104717  704660 cache.go:232] Successfully downloaded all kic artifacts
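
[Editor's note] The cache.go lines above implement a look-before-download policy: the preloaded-images tarball and the kicbase image are fetched only when absent from the local cache ("Found ... in cache, skipping download"). A rough sketch of that existence check, under the assumption of a simple flat cache directory (the helper name cachedOrFetch and the stub fetch closure are illustrative, not minikube's downloader, which also verifies checksums):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cachedOrFetch returns the cached path when the artifact already exists,
    // otherwise runs fetch, mirroring the skip-download behavior logged above.
    func cachedOrFetch(cacheDir, name string, fetch func(dst string) error) (string, error) {
        dst := filepath.Join(cacheDir, name)
        if _, err := os.Stat(dst); err == nil {
            return dst, nil // cache hit: skip download
        }
        if err := fetch(dst); err != nil {
            return "", err
        }
        return dst, nil
    }

    func main() {
        p, err := cachedOrFetch(os.TempDir(),
            "preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4",
            func(dst string) error { return os.WriteFile(dst, []byte("stub"), 0o644) })
        fmt.Println(p, err)
    }
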
	I1002 20:28:28.104748  704660 start.go:360] acquireMachinesLock for addons-991638: {Name:mk53aeb56b1e9fb052ee11df133ba143769d5932 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:28:28.104882  704660 start.go:364] duration metric: took 113.831µs to acquireMachinesLock for "addons-991638"
	I1002 20:28:28.104912  704660 start.go:93] Provisioning new machine with config: &{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:28:28.104985  704660 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:28:28.108517  704660 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:28:28.108807  704660 start.go:159] libmachine.API.Create for "addons-991638" (driver="docker")
	I1002 20:28:28.108861  704660 client.go:168] LocalClient.Create starting
	I1002 20:28:28.108989  704660 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem
	I1002 20:28:28.920995  704660 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem
	I1002 20:28:29.719304  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:28:29.735220  704660 cli_runner.go:211] docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:28:29.735320  704660 network_create.go:284] running [docker network inspect addons-991638] to gather additional debugging logs...
	I1002 20:28:29.735342  704660 cli_runner.go:164] Run: docker network inspect addons-991638
	W1002 20:28:29.756033  704660 cli_runner.go:211] docker network inspect addons-991638 returned with exit code 1
	I1002 20:28:29.756065  704660 network_create.go:287] error running [docker network inspect addons-991638]: docker network inspect addons-991638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-991638 not found
	I1002 20:28:29.756079  704660 network_create.go:289] output of [docker network inspect addons-991638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-991638 not found
	
	** /stderr **
	I1002 20:28:29.756173  704660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:29.772458  704660 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d5e320}
	I1002 20:28:29.772498  704660 network_create.go:124] attempt to create docker network addons-991638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:28:29.772554  704660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-991638 addons-991638
	I1002 20:28:29.829752  704660 network_create.go:108] docker network addons-991638 192.168.49.0/24 created
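
[Editor's note] The exit-code-1 warnings above are the expected path: `docker network inspect` fails for a network that does not exist yet, which is what tells network_create.go to pick a free private subnet and create the bridge. A compressed sketch of that inspect-then-create fallback, with the subnet and gateway hard-coded rather than scanned for, and the `-o --ip-masq -o --icc` options from the logged command trimmed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureNetwork creates the bridge network only if inspect says it is absent,
    // the same fallback exercised in the log at 20:28:29.719-29.829.
    func ensureNetwork(name, subnet, gateway string) error {
        // Inspect exits non-zero when the network does not exist.
        if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
            return nil // already exists
        }
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "com.docker.network.driver.mtu=1500",
            name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(ensureNetwork("demo-net", "192.168.49.0/24", "192.168.49.1"))
    }
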
	I1002 20:28:29.829781  704660 kic.go:121] calculated static IP "192.168.49.2" for the "addons-991638" container
	I1002 20:28:29.829879  704660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:28:29.847391  704660 cli_runner.go:164] Run: docker volume create addons-991638 --label name.minikube.sigs.k8s.io=addons-991638 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:28:29.864875  704660 oci.go:103] Successfully created a docker volume addons-991638
	I1002 20:28:29.864995  704660 cli_runner.go:164] Run: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:28:32.119965  704660 cli_runner.go:217] Completed: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.254927204s)
	I1002 20:28:32.120005  704660 oci.go:107] Successfully prepared a docker volume addons-991638
	I1002 20:28:32.120024  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:32.120045  704660 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:28:32.120115  704660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:28:36.088209  704660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.968050647s)
	I1002 20:28:36.088240  704660 kic.go:203] duration metric: took 3.968193754s to extract preloaded images to volume ...
	W1002 20:28:36.088386  704660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:28:36.088487  704660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:28:36.149550  704660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-991638 --name addons-991638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-991638 --network addons-991638 --ip 192.168.49.2 --volume addons-991638:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:28:36.432531  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Running}}
	I1002 20:28:36.459147  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:36.484423  704660 cli_runner.go:164] Run: docker exec addons-991638 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:28:36.539034  704660 oci.go:144] the created container "addons-991638" has a running status.
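
[Editor's note] The pair of `docker container inspect` calls right above poll the new container through Go templates (`--format={{.State.Running}}` and `{{.State.Status}}`). Something like the following reproduces that readiness probe; the container name is whatever was started, and this helper is illustrative, not minikube's oci package:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerRunning asks dockerd for the container's .State.Running flag,
    // using the same template as the inspect calls logged above.
    func containerRunning(name string) (bool, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Running}}").Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "true", nil
    }

    func main() {
        ok, err := containerRunning("addons-991638")
        fmt.Println(ok, err)
    }
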
	I1002 20:28:36.539068  704660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa...
	I1002 20:28:37.262683  704660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:28:37.288911  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.309985  704660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:28:37.310010  704660 kic_runner.go:114] Args: [docker exec --privileged addons-991638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:28:37.369831  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.391035  704660 machine.go:93] provisionDockerMachine start ...
	I1002 20:28:37.391126  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.411223  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.411540  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.411549  704660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:28:37.553086  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.553108  704660 ubuntu.go:182] provisioning hostname "addons-991638"
	I1002 20:28:37.553169  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.575369  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.575674  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.575686  704660 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-991638 && echo "addons-991638" | sudo tee /etc/hostname
	I1002 20:28:37.721568  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.721652  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.747484  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.747789  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.747811  704660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-991638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-991638/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-991638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:28:37.877526  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:28:37.877550  704660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-702037/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-702037/.minikube}
	I1002 20:28:37.877573  704660 ubuntu.go:190] setting up certificates
	I1002 20:28:37.877582  704660 provision.go:84] configureAuth start
	I1002 20:28:37.877644  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:37.894231  704660 provision.go:143] copyHostCerts
	I1002 20:28:37.894324  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem (1078 bytes)
	I1002 20:28:37.894448  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem (1123 bytes)
	I1002 20:28:37.894507  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem (1675 bytes)
	I1002 20:28:37.894559  704660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem org=jenkins.addons-991638 san=[127.0.0.1 192.168.49.2 addons-991638 localhost minikube]
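
[Editor's note] provision.go is issuing a per-machine server certificate signed by the minikube CA, with exactly the SAN list shown above (127.0.0.1, the container IP 192.168.49.2, the hostname, localhost, minikube). A self-contained stdlib sketch of minting a SAN-bearing server cert from a freshly made CA; key size, serial numbers, and validity are arbitrary, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // CA key and self-signed CA certificate (the ca.pem role above).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "addons-991638"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-991638", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
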
	I1002 20:28:38.951532  704660 provision.go:177] copyRemoteCerts
	I1002 20:28:38.951598  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:28:38.951639  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:38.968871  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:39.069322  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:28:39.087473  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:28:39.106442  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:28:39.125193  704660 provision.go:87] duration metric: took 1.247587619s to configureAuth
	I1002 20:28:39.125222  704660 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:28:39.125407  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:28:39.125491  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.145970  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.146282  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.146299  704660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 20:28:39.282106  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 20:28:39.282131  704660 ubuntu.go:71] root file system type: overlay
	I1002 20:28:39.282235  704660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 20:28:39.282310  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.300258  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.300556  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.300651  704660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 20:28:39.442933  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 20:28:39.443023  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.460361  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.460680  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.460703  704660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 20:28:40.382609  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-02 20:28:39.437593143 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1002 20:28:40.382680  704660 machine.go:96] duration metric: took 2.991625077s to provisionDockerMachine
	I1002 20:28:40.382776  704660 client.go:171] duration metric: took 12.273900895s to LocalClient.Create
	I1002 20:28:40.382819  704660 start.go:167] duration metric: took 12.27401677s to libmachine.API.Create "addons-991638"
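
[Editor's note] The shell pipeline at 20:28:39.460 through 20:28:40.382 is a guard: `diff -u old new || { mv ...; daemon-reload; restart; }` only installs docker.service.new and bounces dockerd when the rendered unit actually differs, which is why the unified diff is echoed back in the output above. The same idempotent write in Go might look like this; the paths and reload command come from the log, but the helper itself is a sketch, not minikube's provision code:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // replaceIfChanged installs newPath over oldPath and runs the reload command
    // only when the contents differ, mirroring the `diff -u ... || { ... }` guard.
    func replaceIfChanged(oldPath, newPath string, reload ...string) (bool, error) {
        oldB, _ := os.ReadFile(oldPath) // a missing old file reads as empty, so it differs
        newB, err := os.ReadFile(newPath)
        if err != nil {
            return false, err
        }
        if bytes.Equal(oldB, newB) {
            return false, os.Remove(newPath) // unchanged: discard the .new file
        }
        if err := os.Rename(newPath, oldPath); err != nil {
            return false, err
        }
        if len(reload) > 0 {
            return true, exec.Command(reload[0], reload[1:]...).Run()
        }
        return true, nil
    }

    func main() {
        changed, err := replaceIfChanged(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new",
            "systemctl", "daemon-reload")
        fmt.Println(changed, err)
    }
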
	I1002 20:28:40.382841  704660 start.go:293] postStartSetup for "addons-991638" (driver="docker")
	I1002 20:28:40.382863  704660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:28:40.382961  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:28:40.383028  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.400184  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.497649  704660 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:28:40.501057  704660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:28:40.501087  704660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:28:40.501099  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/addons for local assets ...
	I1002 20:28:40.501170  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/files for local assets ...
	I1002 20:28:40.501198  704660 start.go:296] duration metric: took 118.339458ms for postStartSetup
	I1002 20:28:40.501542  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.519025  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:40.519322  704660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:28:40.519374  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.535401  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.626314  704660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:28:40.631258  704660 start.go:128] duration metric: took 12.526256292s to createHost
	I1002 20:28:40.631280  704660 start.go:83] releasing machines lock for "addons-991638", held for 12.526385541s
	I1002 20:28:40.631365  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.648027  704660 ssh_runner.go:195] Run: cat /version.json
	I1002 20:28:40.648051  704660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:28:40.648079  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.648112  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.671874  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.672768  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.765471  704660 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:40.858838  704660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:28:40.863487  704660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:28:40.863561  704660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:28:40.891689  704660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:28:40.891716  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:40.891748  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:40.891847  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:40.905197  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 20:28:40.914585  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 20:28:40.923483  704660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 20:28:40.923613  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 20:28:40.932751  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.941795  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 20:28:40.950514  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.959583  704660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:28:40.967941  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 20:28:40.976883  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 20:28:40.986149  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 20:28:40.995305  704660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:28:41.004003  704660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:28:41.012739  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.128237  704660 ssh_runner.go:195] Run: sudo systemctl restart containerd
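
[Editor's note] The run of ssh_runner calls from 20:28:40.905 to 20:28:40.990 normalizes /etc/containerd/config.toml with in-place sed edits: sandbox_image, restrict_oom_score_adj, SystemdCgroup=false to match the cgroupfs driver detected above, runc v2, and conf_dir. One of those rewrites, done with Go's regexp instead of sed, purely as illustration of what the logged command does:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Equivalent of the logged command:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    // forceCgroupfs rewrites every SystemdCgroup assignment to false,
    // preserving the line's original indentation via the capture group.
    func forceCgroupfs(configTOML []byte) []byte {
        return systemdCgroup.ReplaceAll(configTOML, []byte("${1}SystemdCgroup = false"))
    }

    func main() {
        in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "    SystemdCgroup = true\n")
        fmt.Print(string(forceCgroupfs(in)))
    }
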
	I1002 20:28:41.231332  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:41.231381  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:41.231441  704660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 20:28:41.246943  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.259982  704660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:28:41.299529  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.312040  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 20:28:41.325475  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:41.339679  704660 ssh_runner.go:195] Run: which cri-dockerd
	I1002 20:28:41.343375  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 20:28:41.351275  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 20:28:41.364332  704660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 20:28:41.484463  704660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 20:28:41.601245  704660 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 20:28:41.601360  704660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 20:28:41.614352  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 20:28:41.626868  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.733314  704660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 20:28:42.111293  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:28:42.128509  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 20:28:42.145965  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.163934  704660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 20:28:42.308063  704660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 20:28:42.433113  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.552919  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 20:28:42.569022  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 20:28:42.582319  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.699949  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 20:28:42.769589  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.783022  704660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 20:28:42.783145  704660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 20:28:42.787107  704660 start.go:563] Will wait 60s for crictl version
	I1002 20:28:42.787194  704660 ssh_runner.go:195] Run: which crictl
	I1002 20:28:42.790829  704660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:28:42.815945  704660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 20:28:42.816103  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.842953  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.874688  704660 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 20:28:42.874787  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:42.890887  704660 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:28:42.895320  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
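
[Editor's note] That bash one-liner rewrites /etc/hosts idempotently: drop any stale line ending in a tab plus `host.minikube.internal`, append the current mapping, and copy the scratch file back under sudo. The same drop-then-append dance in Go, writing to a demo file instead of the real /etc/hosts; the tab-separated "ip<TAB>host" format matches the grep pattern in the logged command:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost removes any existing line ending in "\t<host>" and appends
    // "ip\thost", the same effect as the grep -v / echo pipeline above.
    func upsertHost(hosts, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        out := upsertHost("127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n",
            "192.168.49.1", "host.minikube.internal")
        fmt.Print(out)
        _ = os.WriteFile("/tmp/hosts.demo", []byte(out), 0o644) // scratch file, not /etc/hosts
    }
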
	I1002 20:28:42.906278  704660 kubeadm.go:883] updating cluster {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:28:42.906402  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:42.906467  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.925708  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.925733  704660 docker.go:621] Images already preloaded, skipping extraction
	I1002 20:28:42.925801  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.945361  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.945383  704660 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:28:42.945393  704660 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1002 20:28:42.945504  704660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-991638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:28:42.945582  704660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 20:28:42.996799  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:42.996828  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:42.996844  704660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:28:42.996865  704660 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-991638 NodeName:addons-991638 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:28:42.996983  704660 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-991638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:28:42.997055  704660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:28:43.006552  704660 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:28:43.006645  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:28:43.015646  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1002 20:28:43.030545  704660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:28:43.044123  704660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 20:28:43.057931  704660 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:28:43.061696  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
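The /etc/hosts rewrite above is idempotent: any stale control-plane.minikube.internal line is filtered out before the current mapping is appended, and the result is copied (not moved) back. Unrolled for readability, the same effect under the log's values:

    # Drop any previous mapping, append the fresh one, then copy the temp file back.
    # cp rather than mv keeps the original inode, which matters because Docker
    # bind-mounts /etc/hosts into the container.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.49.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts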
	I1002 20:28:43.072014  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:43.187259  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:28:43.203829  704660 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638 for IP: 192.168.49.2
	I1002 20:28:43.203899  704660 certs.go:195] generating shared ca certs ...
	I1002 20:28:43.203929  704660 certs.go:227] acquiring lock for ca certs: {Name:mk80feb87d46a3c61de00b383dd8ac7fd2e19095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:43.204734  704660 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key
	I1002 20:28:44.637131  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt ...
	I1002 20:28:44.637163  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt: {Name:mkb6d8319d3a74d42b081683e314c37e53586717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637366  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key ...
	I1002 20:28:44.637379  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key: {Name:mkbd44d267c3b1cf1fed0a906ac7bf46743d8695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637481  704660 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key
	I1002 20:28:45.683223  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt ...
	I1002 20:28:45.683262  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt: {Name:mkf2892474e0dfa857be215b552060af628196ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683490  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key ...
	I1002 20:28:45.683507  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key: {Name:mkb3e427bf0a6e7ceb613b926e3c90e07409da52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683588  704660 certs.go:257] generating profile certs ...
	I1002 20:28:45.683654  704660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key
	I1002 20:28:45.683671  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt with IP's: []
	I1002 20:28:46.046463  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt ...
	I1002 20:28:46.046497  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: {Name:mk51f9d8abe3f7006e638458dae2df70cdaa936a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046676  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key ...
	I1002 20:28:46.046691  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key: {Name:mke5acc604e8c4ff883546df37d116f9c766e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046773  704660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b
	I1002 20:28:46.046795  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:28:46.569113  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b ...
	I1002 20:28:46.569145  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b: {Name:mk40a7d58b6523a132d065d0266597e722b3762d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.569955  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b ...
	I1002 20:28:46.569974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b: {Name:mkbe601cfd4f3105ca705f6de8b8f9d490a11ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.570609  704660 certs.go:382] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt
	I1002 20:28:46.570694  704660 certs.go:386] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key
	I1002 20:28:46.570747  704660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key
	I1002 20:28:46.570767  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt with IP's: []
	I1002 20:28:46.754716  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt ...
	I1002 20:28:46.754747  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt: {Name:mkd0f46ec8109fe64dda020f7c270bd3d9dd6bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.754958  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key ...
	I1002 20:28:46.754974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key: {Name:mk7b62b96428d619ab88e3c0c6972f37ee378b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.755195  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:28:46.755238  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:28:46.755269  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:28:46.755294  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem (1675 bytes)
	I1002 20:28:46.755827  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:28:46.773406  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:28:46.790954  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:28:46.807835  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:28:46.825141  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:28:46.842372  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:28:46.860238  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:28:46.877776  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:28:46.894424  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:28:46.911754  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:28:46.925117  704660 ssh_runner.go:195] Run: openssl version
	I1002 20:28:46.931161  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:28:46.940887  704660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945128  704660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945198  704660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.986089  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
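The b5213941.0 name follows OpenSSL's subject-hash lookup convention: tools resolve a CA under /etc/ssl/certs by the hash of its subject name plus a .0 suffix, which is exactly what the openssl x509 -hash run above computes. To reproduce it by hand with the paths from the log:

    # Print the subject hash OpenSSL uses to find the CA in /etc/ssl/certs.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created above.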
	I1002 20:28:46.995228  704660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:28:46.998614  704660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:28:46.998670  704660 kubeadm.go:400] StartCluster: {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:46.998801  704660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 20:28:47.017260  704660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:28:47.024934  704660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:28:47.032572  704660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:28:47.032637  704660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:28:47.040541  704660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:28:47.040563  704660 kubeadm.go:157] found existing configuration files:
	
	I1002 20:28:47.040632  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:28:47.048232  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:28:47.048324  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:28:47.055897  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:28:47.063851  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:28:47.063972  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:28:47.071920  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.079791  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:28:47.079884  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.087482  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:28:47.095260  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:28:47.095325  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:28:47.102743  704660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:28:47.143961  704660 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:28:47.144023  704660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:28:47.171162  704660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:28:47.171292  704660 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:28:47.171362  704660 kubeadm.go:318] OS: Linux
	I1002 20:28:47.171451  704660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:28:47.171534  704660 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:28:47.171621  704660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:28:47.171707  704660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:28:47.171790  704660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:28:47.171876  704660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:28:47.171956  704660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:28:47.172038  704660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:28:47.172128  704660 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:28:47.235837  704660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:28:47.235957  704660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:28:47.236052  704660 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:28:47.257841  704660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:28:47.262676  704660 out.go:252]   - Generating certificates and keys ...
	I1002 20:28:47.262771  704660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:28:47.262845  704660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:28:47.756271  704660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:28:48.584093  704660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:28:48.888267  704660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:28:49.699713  704660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:28:50.057163  704660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:28:50.057649  704660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.779363  704660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:28:50.779734  704660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.900170  704660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:28:51.497655  704660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:28:51.954519  704660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:28:51.954818  704660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:28:53.080191  704660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:28:53.266970  704660 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:28:53.973649  704660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:28:54.725487  704660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:28:55.109834  704660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:28:55.110186  704660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:28:55.113467  704660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:28:55.117318  704660 out.go:252]   - Booting up control plane ...
	I1002 20:28:55.117435  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:28:55.117518  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:28:55.118060  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:28:55.141929  704660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:28:55.142323  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:28:55.150629  704660 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:28:55.150957  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:28:55.151008  704660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:28:55.286296  704660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:28:55.286428  704660 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:28:56.789783  704660 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501225822s
	I1002 20:28:56.789937  704660 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:28:56.790047  704660 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:28:56.790165  704660 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:28:56.790264  704660 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:28:58.802179  704660 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.012574504s
	I1002 20:29:00.806811  704660 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.017417752s
	I1002 20:29:02.791474  704660 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002021418s
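The kubelet and control-plane checks above poll fixed health endpoints; they can also be probed by hand from inside the node to diagnose a slow bring-up. A sketch (the TLS endpoints need -k since they present cluster-CA-signed certificates):

    curl -s  http://127.0.0.1:10248/healthz         # kubelet
    curl -sk https://192.168.49.2:8443/livez        # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz        # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez          # kube-scheduler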
	I1002 20:29:02.814104  704660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:29:02.827699  704660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:29:02.846247  704660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:29:02.846862  704660 kubeadm.go:318] [mark-control-plane] Marking the node addons-991638 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:29:02.861722  704660 kubeadm.go:318] [bootstrap-token] Using token: z0jdd4.ysfi1vhms678tv6t
	I1002 20:29:02.864796  704660 out.go:252]   - Configuring RBAC rules ...
	I1002 20:29:02.864929  704660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:29:02.869885  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:29:02.888805  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:29:02.892893  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:29:02.897307  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:29:02.902794  704660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:29:03.198711  704660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:29:03.626604  704660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:29:04.197660  704660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:29:04.199081  704660 kubeadm.go:318] 
	I1002 20:29:04.199168  704660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:29:04.199174  704660 kubeadm.go:318] 
	I1002 20:29:04.199283  704660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:29:04.199304  704660 kubeadm.go:318] 
	I1002 20:29:04.199332  704660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:29:04.199403  704660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:29:04.199462  704660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:29:04.199470  704660 kubeadm.go:318] 
	I1002 20:29:04.199544  704660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:29:04.199559  704660 kubeadm.go:318] 
	I1002 20:29:04.199633  704660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:29:04.199648  704660 kubeadm.go:318] 
	I1002 20:29:04.199708  704660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:29:04.199805  704660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:29:04.199891  704660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:29:04.199904  704660 kubeadm.go:318] 
	I1002 20:29:04.199999  704660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:29:04.200089  704660 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:29:04.200099  704660 kubeadm.go:318] 
	I1002 20:29:04.200207  704660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200351  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 \
	I1002 20:29:04.200382  704660 kubeadm.go:318] 	--control-plane 
	I1002 20:29:04.200390  704660 kubeadm.go:318] 
	I1002 20:29:04.200503  704660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:29:04.200516  704660 kubeadm.go:318] 
	I1002 20:29:04.200612  704660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200736  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 
	I1002 20:29:04.203776  704660 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:29:04.204016  704660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:29:04.204131  704660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
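The --discovery-token-ca-cert-hash pin in the join commands above is a SHA-256 digest of the DER-encoded public key of the cluster CA. It can be recomputed out of band to verify a join command; a sketch using the standard OpenSSL pipeline and the CA path from the kubeadm config above:

    # Recompute the sha256:<hex> pin that "kubeadm join" checks the CA against.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'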
	I1002 20:29:04.204150  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:29:04.204164  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:29:04.207498  704660 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:29:04.210410  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:29:04.217868  704660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
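The 496-byte 1-k8s.conflist written above is what lets the bridge CNI route pod traffic within the 10.244.0.0/16 pod CIDR configured earlier. Its exact contents are not shown in the log; an illustrative sketch of a bridge-plus-portmap conflist of that shape (field names and values approximate, written to a scratch path so as not to clobber the real file):

    cat > /tmp/1-k8s.conflist.example <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF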
	I1002 20:29:04.235604  704660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:29:04.235701  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.235739  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-991638 minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-991638 minikube.k8s.io/primary=true
	I1002 20:29:04.254399  704660 ops.go:34] apiserver oom_adj: -16
	I1002 20:29:04.369134  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.869740  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.370081  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.870196  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.369731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.870115  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.369228  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.869851  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.369279  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.869731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.972720  704660 kubeadm.go:1113] duration metric: took 4.737085496s to wait for elevateKubeSystemPrivileges
	I1002 20:29:08.972751  704660 kubeadm.go:402] duration metric: took 21.974085235s to StartCluster
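The half-second cadence of the repeated "kubectl get sa default" runs above is a readiness poll: the controller-manager creates the default ServiceAccount asynchronously, so minikube retries until it appears before considering kube-system privileges elevated. The equivalent wait, written as a plain loop:

    # Poll until the default ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done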
	I1002 20:29:08.972769  704660 settings.go:142] acquiring lock: {Name:mk05279472feb5063a5c2285eba6fd6d59490060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.972884  704660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:29:08.973255  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/kubeconfig: {Name:mk451cd073bc3a44904ff8d0351225145e56e5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.973483  704660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:29:08.973596  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:29:08.973840  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.973881  704660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 20:29:08.973962  704660 addons.go:69] Setting yakd=true in profile "addons-991638"
	I1002 20:29:08.973977  704660 addons.go:238] Setting addon yakd=true in "addons-991638"
	I1002 20:29:08.973998  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.974491  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.974944  704660 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-991638"
	I1002 20:29:08.974969  704660 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-991638"
	I1002 20:29:08.974993  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.975410  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.975798  704660 addons.go:69] Setting cloud-spanner=true in profile "addons-991638"
	I1002 20:29:08.975820  704660 addons.go:238] Setting addon cloud-spanner=true in "addons-991638"
	I1002 20:29:08.975844  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.976228  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.978568  704660 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-991638"
	I1002 20:29:08.978639  704660 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:08.978669  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.979258  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.980070  704660 out.go:179] * Verifying Kubernetes components...
	I1002 20:29:08.980299  704660 addons.go:69] Setting registry-creds=true in profile "addons-991638"
	I1002 20:29:08.980320  704660 addons.go:238] Setting addon registry-creds=true in "addons-991638"
	I1002 20:29:08.980348  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.980878  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984024  704660 addons.go:69] Setting storage-provisioner=true in profile "addons-991638"
	I1002 20:29:08.984111  704660 addons.go:238] Setting addon storage-provisioner=true in "addons-991638"
	I1002 20:29:08.985311  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984905  704660 addons.go:69] Setting default-storageclass=true in profile "addons-991638"
	I1002 20:29:08.986095  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-991638"
	I1002 20:29:08.986385  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.997940  704660 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-991638"
	I1002 20:29:08.997997  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-991638"
	I1002 20:29:08.998330  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984914  704660 addons.go:69] Setting gcp-auth=true in profile "addons-991638"
	I1002 20:29:08.998967  704660 mustload.go:65] Loading cluster: addons-991638
	I1002 20:29:08.999148  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.999394  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984921  704660 addons.go:69] Setting ingress=true in profile "addons-991638"
	I1002 20:29:09.012451  704660 addons.go:238] Setting addon ingress=true in "addons-991638"
	I1002 20:29:09.012506  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.012981  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.017454  704660 addons.go:69] Setting volcano=true in profile "addons-991638"
	I1002 20:29:09.017490  704660 addons.go:238] Setting addon volcano=true in "addons-991638"
	I1002 20:29:09.017527  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.018061  704660 addons.go:69] Setting volumesnapshots=true in profile "addons-991638"
	I1002 20:29:09.018133  704660 addons.go:238] Setting addon volumesnapshots=true in "addons-991638"
	I1002 20:29:09.018173  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984925  704660 addons.go:69] Setting ingress-dns=true in profile "addons-991638"
	I1002 20:29:09.025533  704660 addons.go:238] Setting addon ingress-dns=true in "addons-991638"
	I1002 20:29:09.025587  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.026063  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.044490  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984928  704660 addons.go:69] Setting inspektor-gadget=true in profile "addons-991638"
	I1002 20:29:09.049039  704660 addons.go:238] Setting addon inspektor-gadget=true in "addons-991638"
	I1002 20:29:09.049079  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.049563  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984931  704660 addons.go:69] Setting metrics-server=true in profile "addons-991638"
	I1002 20:29:09.074105  704660 addons.go:238] Setting addon metrics-server=true in "addons-991638"
	I1002 20:29:09.074149  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.075253  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984945  704660 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-991638"
	I1002 20:29:09.101041  704660 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-991638"
	I1002 20:29:09.101085  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.101634  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.134221  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984949  704660 addons.go:69] Setting registry=true in profile "addons-991638"
	I1002 20:29:09.134685  704660 addons.go:238] Setting addon registry=true in "addons-991638"
	I1002 20:29:09.134721  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.135150  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.166068  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.985251  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:29:09.210573  704660 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:29:09.222512  704660 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:29:09.228645  704660 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:09.228697  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:29:09.228802  704660 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:09.228834  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:29:09.228917  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.232353  704660 addons.go:238] Setting addon default-storageclass=true in "addons-991638"
	I1002 20:29:09.232403  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.232836  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.240129  704660 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:29:09.228818  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252033  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:29:09.281457  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.289194  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.276652  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:29:09.291469  704660 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:29:09.291547  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252086  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317140  704660 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-991638"
	I1002 20:29:09.317269  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317905  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.321130  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:29:09.324328  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:29:09.329618  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:29:09.329846  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:09.329862  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:29:09.329924  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.330072  704660 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:29:09.332483  704660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:09.332506  704660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:29:09.332556  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.352512  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.359187  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:29:09.364275  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:29:09.364559  704660 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:09.364575  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:29:09.364638  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.375690  704660 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 20:29:09.375940  704660 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:29:09.386355  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:09.386396  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:29:09.386476  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.402265  704660 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 20:29:09.412773  704660 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 20:29:09.418587  704660 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:09.418666  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 20:29:09.418775  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.419320  704660 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:29:09.423729  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:29:09.423757  704660 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:29:09.423846  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.441567  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:29:09.442010  704660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:29:09.447860  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:29:09.451279  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:29:09.453459  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:09.453480  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:29:09.453561  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.455757  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:29:09.455822  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:29:09.455914  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.465113  704660 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:29:09.469477  704660 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:29:09.469509  704660 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:29:09.469576  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.479455  704660 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 20:29:09.482830  704660 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:29:09.487219  704660 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:29:09.487285  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:29:09.487386  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.498491  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.506413  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:29:09.509491  704660 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:29:09.509670  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:29:09.509687  704660 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:29:09.509759  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.515326  704660 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:09.515349  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:29:09.515413  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.556794  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.592629  704660 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:29:09.595721  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.601773  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.604845  704660 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:29:09.607957  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:09.607982  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:29:09.608078  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.639621  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.660885  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.690935  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.696294  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.717153  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.743500  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.746463  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.751738  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.757583  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.764350  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.764394  704660 retry.go:31] will retry after 315.573784ms: ssh: handshake failed: EOF
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.784428  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.784456  704660 retry.go:31] will retry after 304.179518ms: ssh: handshake failed: EOF
	I1002 20:29:09.898194  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
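The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.49.1) ahead of CoreDNS's forward directive, and a log directive ahead of errors, then replaces the ConfigMap. To confirm what landed:

    # Show the patched Corefile; the inserted block reads:
    #         hosts {
    #            192.168.49.1 host.minikube.internal
    #            fallthrough
    #         }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'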
	I1002 20:29:09.936055  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:29:10.111040  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:10.111126  704660 retry.go:31] will retry after 465.641139ms: ssh: handshake failed: EOF
	I1002 20:29:10.668679  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:29:10.668702  704660 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:29:10.797217  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:29:10.797297  704660 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:29:10.865274  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:10.881693  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:29:10.881716  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:29:10.886079  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:10.921408  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:10.943803  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:29:10.943828  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:29:10.978775  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:29:10.978805  704660 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:29:10.994840  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:11.011037  704660 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:29:11.011073  704660 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:29:11.030493  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:11.032022  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:29:11.032044  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:29:11.035800  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:29:11.035830  704660 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:29:11.071721  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:11.091723  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:11.106681  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:11.145109  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:29:11.145139  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:29:11.148280  704660 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.148309  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:29:11.202167  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.202196  704660 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:29:11.305203  704660 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.305232  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:29:11.316393  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:11.329281  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:29:11.329312  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:29:11.355129  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:11.398833  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.398857  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:29:11.409753  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:29:11.409781  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:29:11.426941  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.428747  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.489773  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:29:11.489841  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:29:11.494567  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.542853  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.615125  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:29:11.615198  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:29:11.677959  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:29:11.678040  704660 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:29:11.863554  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:29:11.863639  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:29:12.043926  704660 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.044010  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:29:12.200094  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:29:12.200165  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:29:12.470826  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.509295  704660 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.573157378s)
	I1002 20:29:12.509455  704660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.611238205s)
	I1002 20:29:12.509528  704660 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
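For reference, the sed pipeline completed above edits the coredns ConfigMap in place: it inserts a hosts block ahead of the forward plugin and a log directive ahead of errors. After the replace, the Corefile looks roughly like this (the remaining default kubeadm plugins are abridged and may differ by version):

	.:53 {
	    log
	    errors
	    health
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	    }
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	    loop
	    reload
	    loadbalance
	}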
	I1002 20:29:12.511038  704660 node_ready.go:35] waiting up to 6m0s for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515289  704660 node_ready.go:49] node "addons-991638" is "Ready"
	I1002 20:29:12.515313  704660 node_ready.go:38] duration metric: took 3.935549ms for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515328  704660 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:29:12.515389  704660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:29:12.613485  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:29:12.613555  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:29:12.794628  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.92930886s)
	I1002 20:29:13.024378  704660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-991638" context rescaled to 1 replicas
	I1002 20:29:13.094487  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:29:13.094553  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:29:13.666276  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:29:13.666353  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:29:14.220703  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:14.220782  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:29:14.633137  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:16.743396  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:29:16.743479  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:16.772705  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:17.648047  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:29:17.758402  704660 addons.go:238] Setting addon gcp-auth=true in "addons-991638"
	I1002 20:29:17.758451  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:17.758915  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:17.782244  704660 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:29:17.782296  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:17.815647  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:19.091966  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.205841491s)
	I1002 20:29:19.092058  704660 addons.go:479] Verifying addon ingress=true in "addons-991638"
	I1002 20:29:19.092330  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.170806627s)
	I1002 20:29:19.092745  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.097877392s)
	I1002 20:29:19.092800  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.06227576s)
	I1002 20:29:19.095718  704660 out.go:179] * Verifying ingress addon...
	I1002 20:29:19.099717  704660 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:29:19.283832  704660 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:29:19.283853  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:19.648674  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.108386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.606825  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
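The kapi.go polling above lists pods by label selector and re-checks until they leave Pending. A hedged client-go sketch of the same loop, using the default kubeconfig; the half-second interval mirrors the cadence of the log lines but is otherwise an assumption:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
			if err != nil {
				log.Fatal(err)
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			fmt.Printf("%d/%d pods running\n", running, len(pods.Items))
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}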
	I1002 20:29:21.102257  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.030489478s)
	I1002 20:29:21.102331  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.01058393s)
	I1002 20:29:21.102523  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.995812674s)
	I1002 20:29:21.102576  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.786160691s)
	I1002 20:29:21.102665  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.747515739s)
	I1002 20:29:21.102736  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.675772832s)
	W1002 20:29:21.102757  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:21.102773  704660 retry.go:31] will retry after 165.427061ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
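This validation error is about the manifest itself: the first YAML document in ig-crd.yaml reaches kubectl without the two fields every Kubernetes object must carry, apiVersion and kind (an empty or truncated document produces exactly this message), which is why the retries keep failing identically. For illustration only, a minimal well-formed CRD showing the required header; all names below are invented, not the actual gadget CRD:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.io
	spec:
	  group: example.io
	  names:
	    kind: Example
	    plural: examples
	    singular: example
	  scope: Namespaced
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object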
	I1002 20:29:21.102843  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.674073931s)
	I1002 20:29:21.102857  704660 addons.go:479] Verifying addon metrics-server=true in "addons-991638"
	I1002 20:29:21.102896  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.608257689s)
	I1002 20:29:21.102908  704660 addons.go:479] Verifying addon registry=true in "addons-991638"
	I1002 20:29:21.103092  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.560138876s)
	I1002 20:29:21.103416  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.632501338s)
	W1002 20:29:21.103659  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:29:21.103480  704660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.588080107s)
	I1002 20:29:21.103716  704660 api_server.go:72] duration metric: took 12.130202438s to wait for apiserver process to appear ...
	I1002 20:29:21.103723  704660 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:29:21.103737  704660 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:29:21.104569  704660 retry.go:31] will retry after 131.465799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
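Unlike the ig-crd failure, this one is an ordering race rather than a bad manifest: the VolumeSnapshot CRDs are created by the same apply that instantiates a VolumeSnapshotClass, and the API server has not finished registering the new kinds when the class is submitted, hence "no matches for kind" and the hint to install CRDs first. The forced re-apply at 20:29:21.236 below completes cleanly once the CRDs are established. The conventional two-phase sequence, sketched with kubectl (file paths illustrative):

	# Apply the CRDs alone, wait until the API server reports them
	# established, then apply the resources that instantiate them.
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml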
	I1002 20:29:21.106517  704660 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-991638 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:29:21.106623  704660 out.go:179] * Verifying registry addon...
	I1002 20:29:21.110687  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:29:21.128889  704660 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 20:29:21.146707  704660 api_server.go:141] control plane version: v1.34.1
	I1002 20:29:21.146750  704660 api_server.go:131] duration metric: took 43.020902ms to wait for apiserver health ...
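The healthz probe above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A stand-alone Go sketch of such a poll; the InsecureSkipVerify transport is for illustration only, since minikube authenticates with its client certificates rather than skipping verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
		}
		const url = "https://192.168.49.2:8443/healthz"
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}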
	I1002 20:29:21.146760  704660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:29:21.231778  704660 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:29:21.231803  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.232570  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.232990  704660 system_pods.go:59] 16 kube-system pods found
	I1002 20:29:21.233027  704660 system_pods.go:61] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233037  704660 system_pods.go:61] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233049  704660 system_pods.go:61] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.233054  704660 system_pods.go:61] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.233058  704660 system_pods.go:61] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.233072  704660 system_pods.go:61] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.233077  704660 system_pods.go:61] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.233082  704660 system_pods.go:61] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.233093  704660 system_pods.go:61] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.233100  704660 system_pods.go:61] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.233110  704660 system_pods.go:61] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.233117  704660 system_pods.go:61] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.233126  704660 system_pods.go:61] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.233138  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233145  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233152  704660 system_pods.go:61] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.233159  704660 system_pods.go:74] duration metric: took 86.393348ms to wait for pod list to return data ...
	I1002 20:29:21.233171  704660 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:29:21.236551  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:21.269271  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:29:21.290207  704660 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
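"Operation cannot be fulfilled ... the object has been modified" is Kubernetes' optimistic-concurrency check: another writer updated the StorageClass between this client's read and its update, so the stale resourceVersion is rejected, and minikube surfaces it as a non-fatal addon warning. The standard remedy is to re-read and re-apply the mutation on conflict, e.g. with client-go's retry.RetryOnConflict. A hedged sketch, assuming a reachable cluster via the default kubeconfig:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries a fresh
			// resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			// Mark the class non-default, as the failed callback above attempted.
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatal(err)
		}
	}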
	I1002 20:29:21.375005  704660 default_sa.go:45] found service account: "default"
	I1002 20:29:21.375031  704660 default_sa.go:55] duration metric: took 141.854284ms for default service account to be created ...
	I1002 20:29:21.375042  704660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:29:21.403678  704660 system_pods.go:86] 17 kube-system pods found
	I1002 20:29:21.403714  704660 system_pods.go:89] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403724  704660 system_pods.go:89] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403730  704660 system_pods.go:89] "csi-hostpath-attacher-0" [e1b49a9e-cc2c-43ad-a104-7517ae3b9b71] Pending
	I1002 20:29:21.403736  704660 system_pods.go:89] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.403740  704660 system_pods.go:89] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.403744  704660 system_pods.go:89] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.403751  704660 system_pods.go:89] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.403755  704660 system_pods.go:89] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.403760  704660 system_pods.go:89] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.403767  704660 system_pods.go:89] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.403774  704660 system_pods.go:89] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.403789  704660 system_pods.go:89] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.403795  704660 system_pods.go:89] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.403857  704660 system_pods.go:89] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.403871  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403878  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403881  704660 system_pods.go:89] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.403889  704660 system_pods.go:126] duration metric: took 28.840694ms to wait for k8s-apps to be running ...
	I1002 20:29:21.403905  704660 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:29:21.403962  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:29:21.633145  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.633273  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.719440  704660 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.937165373s)
	I1002 20:29:21.723512  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:21.737044  704660 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:29:21.739233  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.105985614s)
	I1002 20:29:21.739269  704660 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:21.741380  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:29:21.741407  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:29:21.741519  704660 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:29:21.746220  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:29:21.749098  704660 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:29:21.749124  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:21.885645  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:29:21.885723  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:29:21.999241  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:21.999306  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:29:22.103650  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.107641  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:22.115675  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.249646  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:22.603835  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.614145  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.750221  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.104878  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.113990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.250841  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.614664  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.616397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.754661  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.028432  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.791823015s)
	I1002 20:29:24.104308  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.114739  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.250667  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.302476  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.194753737s)
	I1002 20:29:24.302845  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.033536467s)
	W1002 20:29:24.302913  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:24.302985  704660 retry.go:31] will retry after 309.54405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:24.302944  704660 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.89841849s)
	I1002 20:29:24.303063  704660 system_svc.go:56] duration metric: took 2.899157354s WaitForService to wait for kubelet
	I1002 20:29:24.303086  704660 kubeadm.go:586] duration metric: took 15.329570576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:29:24.303134  704660 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:29:24.305338  704660 addons.go:479] Verifying addon gcp-auth=true in "addons-991638"
	I1002 20:29:24.308194  704660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:29:24.308224  704660 node_conditions.go:123] node cpu capacity is 2
	I1002 20:29:24.308238  704660 node_conditions.go:105] duration metric: took 5.087392ms to run NodePressure ...
	I1002 20:29:24.308251  704660 start.go:241] waiting for startup goroutines ...
	I1002 20:29:24.310445  704660 out.go:179] * Verifying gcp-auth addon...
	I1002 20:29:24.313602  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:29:24.325918  704660 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:29:24.325990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:24.603413  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.613652  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:24.613983  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.750444  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.817604  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.103685  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.118065  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.249976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.317010  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.603841  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.613949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.750092  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.817987  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.957381  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.343690162s)
	W1002 20:29:25.957546  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:25.957590  704660 retry.go:31] will retry after 334.218122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:26.104386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.114584  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.250032  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.292352  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:26.317525  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:26.604047  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.613938  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.750249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.817111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.103343  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.113575  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.250109  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.317078  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.444622  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.152189827s)
	W1002 20:29:27.444714  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:27.444752  704660 retry.go:31] will retry after 546.51266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:27.604261  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.614167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.749521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.817914  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.992173  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:28.104304  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.114156  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.249193  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.317122  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:28.603290  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.614437  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.749750  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.817014  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:29:28.983712  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:28.983784  704660 retry.go:31] will retry after 1.260023447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:29.103350  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.114454  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.249644  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.317067  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:29.602986  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.613726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.749688  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.816730  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.103822  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.114057  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.244571  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:30.250615  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.603619  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.614026  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.749853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.816479  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.103600  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.114190  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.249506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.298691  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.054084159s)
	W1002 20:29:31.298721  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:31.298741  704660 retry.go:31] will retry after 1.646308182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:31.316219  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.605040  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.631189  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.750015  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.817796  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.103881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.116470  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.250021  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.317307  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.604391  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.614775  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.750540  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.816630  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.946032  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:33.104871  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.115283  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.250183  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.317668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:33.603187  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.614529  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.749647  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.817102  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.018177  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.072106262s)
	W1002 20:29:34.018217  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:34.018266  704660 retry.go:31] will retry after 2.385257575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:34.104529  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.114836  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.250452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.318843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.603645  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.614617  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.750082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.817533  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.107703  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.114893  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.251718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.603848  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.613657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.750110  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.816940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.103942  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.113970  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.250099  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.316846  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.404147  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:36.604239  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.613891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.750685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.818255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.103487  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.114495  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.250302  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.316913  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.595720  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.191535427s)
	W1002 20:29:37.595768  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:37.595789  704660 retry.go:31] will retry after 3.1319796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:37.604699  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.613531  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.750080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.820120  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.135110  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.135518  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.251304  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.317891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.603678  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.614208  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.750230  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.817842  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.110039  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.123577  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.253100  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.320981  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.606978  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.619008  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.757188  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.821029  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.104171  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.114472  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.250599  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.316853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.603622  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.614494  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.728573  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:40.750499  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.817269  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.103718  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.113793  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.251438  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.323113  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.606477  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.615889  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.749940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.819471  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.104623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.115622  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.203580  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.474960878s)
	W1002 20:29:42.203682  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:42.203776  704660 retry.go:31] will retry after 7.48710054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
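
Annotation: note the cadence of the warnings so far, will retry after 1.26s, 1.65s, 2.39s, 3.13s, now 7.49s, a backoff that grows each round with jitter. The retry.go source is not reproduced in this report; a minimal, self-contained Go sketch of that pattern (attempt count, base wait, and jitter scheme are hypothetical) could read:

        package main

        import (
        	"errors"
        	"fmt"
        	"math/rand"
        	"time"
        )

        // retryWithBackoff runs op until it succeeds or attempts run out,
        // scaling the base wait up each round and adding random jitter,
        // which reproduces the growing "will retry after ..." delays above.
        func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
        	var err error
        	for i := 0; i < attempts; i++ {
        		if err = op(); err == nil {
        			return nil
        		}
        		jitter := time.Duration(rand.Int63n(int64(base / 2))) // hypothetical jitter scheme
        		wait := base + jitter
        		fmt.Printf("will retry after %v: %v\n", wait, err)
        		time.Sleep(wait)
        		base *= 2
        	}
        	return err
        }

        func main() {
        	// An always-failing op, mirroring the persistent validation error in this log.
        	err := retryWithBackoff(5, time.Second, func() error {
        		return errors.New("error validating data: apiVersion not set, kind not set")
        	})
        	fmt.Println("giving up:", err)
        }
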
	I1002 20:29:42.250824  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.317605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.603374  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.614191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.750400  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.816718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.103173  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.114483  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.249820  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.317639  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.603139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.614668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.750509  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.817740  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.103982  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.113850  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.250679  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.604766  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.615339  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.749664  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.817244  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.105520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.115165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.323737  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.603415  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.614694  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.750384  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.817336  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.104015  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.113900  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.250650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.316397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.603826  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.613857  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.750135  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.817184  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.103139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.114040  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.250197  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.316961  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.603106  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.613879  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.753191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.816593  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.104633  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.114511  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.249966  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.317031  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.603266  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.614360  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.750158  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.817128  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.103974  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.113579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.250363  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.317726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.603262  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.614568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.691764  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:49.753093  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.818136  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.106234  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.117011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.250613  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.317535  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.605091  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.615017  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.751316  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.817578  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.107737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.116527  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.251344  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.319605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.408757  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.716938043s)
	W1002 20:29:51.408854  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.408899  704660 retry.go:31] will retry after 12.661372424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.603144  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.614399  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.750042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.817211  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.104464  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.115011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.250151  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.316858  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.603659  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.614216  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.751315  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.817053  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.104565  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.113559  704660 kapi.go:107] duration metric: took 32.002874096s to wait for kubernetes.io/minikube-addons=registry ...
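
Annotation: the interleaved kapi.go:96 lines are the other half of the addon bring-up, one poll loop per label selector, each checking roughly every 500ms (visible in the ~.103/.603 timestamp cadence) until the matching pods leave Pending; kapi.go:107 above reports the total wait, 32s for the registry selector. The kapi.go source is not part of this report; a minimal client-go sketch of that polling pattern, assuming a recent k8s.io/apimachinery and with the function name and interval chosen for illustration, might look like:

        package kapi // illustrative package name

        import (
        	"context"
        	"time"

        	corev1 "k8s.io/api/core/v1"
        	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        	"k8s.io/apimachinery/pkg/util/wait"
        	"k8s.io/client-go/kubernetes"
        )

        // waitForPodsRunning polls every interval until all pods matching selector
        // in ns are Running or the timeout elapses, matching the observable
        // "waiting for pod ... current state: Pending" behavior in this log.
        func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
        		func(ctx context.Context) (bool, error) {
        			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        			if err != nil {
        				return false, err // a real implementation might tolerate transient API errors
        			}
        			if len(pods.Items) == 0 {
        				return false, nil // nothing scheduled yet; keep waiting
        			}
        			for _, p := range pods.Items {
        				if p.Status.Phase != corev1.PodRunning {
        					return false, nil // still Pending (or similar); keep waiting
        				}
        			}
        			return true, nil
        		})
        }
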
	I1002 20:29:53.250114  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.317821  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.603164  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.750146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.820167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.106776  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.316832  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.603001  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.750421  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.817545  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.103737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.250894  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.316949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.603085  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.750103  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.816937  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.103610  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.250374  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.351350  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.603669  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.750222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.816995  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.103711  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.250016  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.317173  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.603412  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.749585  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.817087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.106858  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.250249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.317416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.602677  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.751843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.816975  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.104520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.250328  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.316837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.603027  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.750542  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.817568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.118971  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.260853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.324376  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.603347  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.751070  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.817027  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.116318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.249998  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.318228  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.604526  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.750944  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.818452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.104307  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.254223  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.318397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.604952  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.750890  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.817295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.106126  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.254295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.317579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.603623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.755126  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.818458  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.070964  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:04.103003  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.251061  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.317116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.604016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.750159  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.819498  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.103756  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.249080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.603780  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.751506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.820087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.861050  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.790044781s)
	W1002 20:30:05.861139  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:05.861176  704660 retry.go:31] will retry after 17.393091817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
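
Annotation: every apply attempt in this section has now failed with the identical validation error. The stderr's own suggestion, disabling client-side validation, would let the apply go through; this is shown only to illustrate the flag placement on the command the log is already running, since the proper fix is a well-formed ig-crd.yaml:

        sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
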
	I1002 20:30:06.103387  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.250507  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.317837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:06.603460  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.750558  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.817614  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.103902  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.250598  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.316702  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.602834  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.754146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.822685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.103768  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.251042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.316848  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.603426  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.750576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.841843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.103764  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.250354  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.331806  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.605318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.750657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.817095  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.103398  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.255408  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.318022  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.603132  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.750403  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.818293  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.104225  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.250993  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.317127  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.603016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.749773  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.817866  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.103202  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.255976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.317255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.604954  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.750466  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.817799  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.121875  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.251358  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.317771  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.603035  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.749741  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.816693  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.103790  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.250141  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.317253  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.603881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.751654  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.834207  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.104408  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.249815  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.316650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.602801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.750009  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.817116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.120769  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.251147  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.352347  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.603722  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.749988  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.817248  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.104049  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.250170  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.317087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.603966  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.751038  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.817272  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.104249  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.254111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.354335  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.603774  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.750446  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.820222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.104228  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.250204  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.317641  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.603235  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.750469  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.817720  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.103219  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.249901  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.354982  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.603352  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.750342  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.816943  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.104120  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.250875  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.316432  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.604183  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.751198  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.851690  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.103478  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.249326  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:22.318236  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.605156  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.750311  704660 kapi.go:107] duration metric: took 1m1.004091859s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:30:22.818417  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.103467  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.254761  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:23.317834  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.603470  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.816589  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.105925  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.317505  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.604867  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.802347  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.547475184s)
	W1002 20:30:24.802389  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:24.802426  704660 retry.go:31] will retry after 27.998098838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:24.817602  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.106548  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.317082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.603074  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.817303  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.103771  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.316828  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.603416  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.816576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.102651  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.316355  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.603434  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.816609  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.103586  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.318112  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.604364  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.816965  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.103801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.317624  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.603114  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.817415  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.103838  704660 kapi.go:107] duration metric: took 1m11.004121778s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:30:30.316991  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.817460  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.316734  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.817416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.321137  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.818165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:33.318614  704660 kapi.go:107] duration metric: took 1m9.005007455s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:30:33.319986  704660 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-991638 cluster.
	I1002 20:30:33.321179  704660 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:30:33.322167  704660 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
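
The three kapi.go:96 loops above poll pods by label selector until they leave Pending, then kapi.go:107 records the elapsed duration. A minimal command-line equivalent of that wait, assuming the namespaces these minikube addons deploy into and an illustrative timeout:

    # Sketch of the kapi.go label-selector wait using kubectl's built-in poller
    kubectl --context addons-991638 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=6m
    kubectl --context addons-991638 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m
    kubectl --context addons-991638 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth \
      --for=condition=Ready --timeout=6m
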
	I1002 20:30:52.801095  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:30:53.728667  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:30:53.728763  704660 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
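
Both apply attempts fail the same way: kubectl's client-side validation requires top-level apiVersion and kind in every document, so an empty or truncated /etc/kubernetes/addons/ig-crd.yaml is rejected before anything reaches the API server (the ig-deployment.yaml objects still go through, hence the "unchanged"/"configured" stdout). The failure mode can be reproduced with a hypothetical stub file and a client-side dry run:

    # A manifest without apiVersion/kind fails validation as in the log above
    cat > /tmp/stub.yaml <<'EOF'
    metadata:
      name: example
    EOF
    kubectl apply --dry-run=client -f /tmp/stub.yaml
    # expected: error validating "/tmp/stub.yaml": error validating data:
    #           [apiVersion not set, kind not set]
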
	I1002 20:30:53.731775  704660 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, registry-creds, volcano, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1002 20:30:53.733577  704660 addons.go:514] duration metric: took 1m44.75893549s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns registry-creds volcano storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1002 20:30:53.733631  704660 start.go:246] waiting for cluster config update ...
	I1002 20:30:53.733654  704660 start.go:255] writing updated cluster config ...
	I1002 20:30:53.733956  704660 ssh_runner.go:195] Run: rm -f paused
	I1002 20:30:53.738361  704660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:53.742889  704660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.750373  704660 pod_ready.go:94] pod "coredns-66bc5c9577-wkwnx" is "Ready"
	I1002 20:30:53.750443  704660 pod_ready.go:86] duration metric: took 7.51962ms for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.752616  704660 pod_ready.go:83] waiting for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.757985  704660 pod_ready.go:94] pod "etcd-addons-991638" is "Ready"
	I1002 20:30:53.758011  704660 pod_ready.go:86] duration metric: took 5.320347ms for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.760125  704660 pod_ready.go:83] waiting for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.764465  704660 pod_ready.go:94] pod "kube-apiserver-addons-991638" is "Ready"
	I1002 20:30:53.764491  704660 pod_ready.go:86] duration metric: took 4.30499ms for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.766969  704660 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.142419  704660 pod_ready.go:94] pod "kube-controller-manager-addons-991638" is "Ready"
	I1002 20:30:54.142449  704660 pod_ready.go:86] duration metric: took 375.451024ms for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.342704  704660 pod_ready.go:83] waiting for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.742276  704660 pod_ready.go:94] pod "kube-proxy-xfnp6" is "Ready"
	I1002 20:30:54.742307  704660 pod_ready.go:86] duration metric: took 399.528424ms for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.943143  704660 pod_ready.go:83] waiting for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344485  704660 pod_ready.go:94] pod "kube-scheduler-addons-991638" is "Ready"
	I1002 20:30:55.344522  704660 pod_ready.go:86] duration metric: took 401.35166ms for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344539  704660 pod_ready.go:40] duration metric: took 1.606141213s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:55.401584  704660 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:30:55.403167  704660 out.go:179] * Done! kubectl is now configured to use "addons-991638" cluster and "default" namespace by default
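
start.go:623 reports the client/cluster skew only informationally: kubectl 1.33.2 against a 1.34.1 cluster is within kubectl's supported one-minor-version skew. To rule skew out entirely, minikube can run its bundled, version-matched kubectl:

    # Use the kubectl bundled with minikube (matches the cluster's 1.34.1)
    minikube -p addons-991638 kubectl -- version
    minikube -p addons-991638 kubectl -- get pods -A
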
	
	
	==> Docker <==
	Oct 02 20:41:01 addons-991638 dockerd[1126]: time="2025-10-02T20:41:01.749163826Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:41:01 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:41:01Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 02 20:41:13 addons-991638 dockerd[1126]: time="2025-10-02T20:41:13.770633632Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:41:15 addons-991638 dockerd[1126]: time="2025-10-02T20:41:15.784248657Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:41:33 addons-991638 dockerd[1126]: time="2025-10-02T20:41:33.132738412Z" level=info msg="ignoring event" container=edb7914b91d7308663d8503de30d8912276738cbdb9fe31ff9d74c47413d6c43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:33 addons-991638 dockerd[1126]: time="2025-10-02T20:41:33.158072151Z" level=info msg="ignoring event" container=eebe9684b11cf0ab8b38635ceac8194f5c838fcfb022131468933b02bb63f89c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:33 addons-991638 dockerd[1126]: time="2025-10-02T20:41:33.429672823Z" level=info msg="ignoring event" container=063272a1fd848c9823216e630d28d29740b2f8e7f29845912da0da7968b77e61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:33 addons-991638 dockerd[1126]: time="2025-10-02T20:41:33.518555386Z" level=info msg="ignoring event" container=30e397fdcba62876be3ff9abdf9440f01cbe3f656c0a6210c1b57ace3ba8eb85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.376606866Z" level=info msg="ignoring event" container=714339ab4a604c289525bc973acefe7fffcb203572519af19a2482038365d617 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.378409384Z" level=info msg="ignoring event" container=f33b41dff54c1d375ee589063f7b3de0d449c0aedb871beba2dd1abb5edca4d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.391690259Z" level=info msg="ignoring event" container=26e913322af4f23c1cb2e7fcc05c9551f7ccdde07a30d85fbc307fedbc176858 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.391745890Z" level=info msg="ignoring event" container=087c9272590bb4f8631d0580eede15d7943b2badb2b365dfe26ed5c8f84458da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.404721654Z" level=info msg="ignoring event" container=7fe1ae5b58acc97bdbec5436fc1e8fe167da6028a957c109af2d50bbbc1a9225 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.431155231Z" level=info msg="ignoring event" container=f673a92f38d37ff2b9168a4e722f944abe2606ee02446191ba862dd3cf0c9981 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.449110603Z" level=info msg="ignoring event" container=8c93b919c5b4ba588a9fdab9a89ee295ebb1e286740b1a573de00e1830c32d16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.497555480Z" level=info msg="ignoring event" container=3afb513dbbbaa12382477331013a83d1c666871d014dc4af0475bc94e9c60cf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.714731195Z" level=info msg="ignoring event" container=a9a8d56da7da51e5a7d1de6c89b051559bf650291229fcfe2528a3e85d2c93f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.772266592Z" level=info msg="ignoring event" container=5c0161b7af378d5c743c9d7c6d16a4f83fa100fd93e02031766fef7f29b13e4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:34 addons-991638 dockerd[1126]: time="2025-10-02T20:41:34.795660959Z" level=info msg="ignoring event" container=c80a56727d57ae02a5ca413b8e4b85c333193ca56ff785b12b96463778469d20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:39 addons-991638 dockerd[1126]: time="2025-10-02T20:41:39.766420043Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:42:22 addons-991638 dockerd[1126]: time="2025-10-02T20:42:22.768797554Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:43:44 addons-991638 dockerd[1126]: time="2025-10-02T20:43:44.781321084Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:46:17 addons-991638 dockerd[1126]: time="2025-10-02T20:46:17.853818913Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:46:17 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:46:17Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 20:46:33 addons-991638 dockerd[1126]: time="2025-10-02T20:46:33.790882430Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
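
The repeated toomanyrequests errors are Docker Hub's unauthenticated pull rate limit, and they explain the Volcano failure at the top of this report: test-job-nginx-0 stayed Pending because docker.io/nginx:alpine could never be pulled. Two common workarounds, sketched here with a placeholder Docker Hub user, are authenticating the daemon inside the node or side-loading the image from the host:

    # Option 1: authenticated pulls get a higher rate limit
    minikube -p addons-991638 ssh -- docker login -u <dockerhub-user>

    # Option 2: pull on the host and load the image into the cluster
    docker pull nginx:alpine
    minikube -p addons-991638 image load nginx:alpine
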
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	47dac9cf297c2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          14 minutes ago      Running             busybox                   0                   bbce1f80c46b4       busybox                                    default
	810d41d3d1f91       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             18 minutes ago      Running             controller                0                   38baae6c52ebc       ingress-nginx-controller-9cc49f96f-g6rz7   ingress-nginx
	3ef8d0f1a48cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   18 minutes ago      Exited              patch                     0                   bf2651aa1dde2       ingress-nginx-admission-patch-z8w27        ingress-nginx
	0612a088672a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   18 minutes ago      Exited              create                    0                   3e77d9aaaed22       ingress-nginx-admission-create-h2p7z       ingress-nginx
	df4c807a71bc6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            19 minutes ago      Running             gadget                    0                   2dffa89109ee8       gadget-gq5qh                               gadget
	dc6958ff54fd4       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         19 minutes ago      Running             minikube-ingress-dns      0                   c8ba98b08e917       kube-ingress-dns-minikube                  kube-system
	7b7e993c0e79f       ba04bb24b9575                                                                                                                19 minutes ago      Running             storage-provisioner       0                   48962134af601       storage-provisioner                        kube-system
	6691f55a72958       138784d87c9c5                                                                                                                19 minutes ago      Running             coredns                   0                   8d8b118e8d1e4       coredns-66bc5c9577-wkwnx                   kube-system
	484f1ee7ca6c4       05baa95f5142d                                                                                                                19 minutes ago      Running             kube-proxy                0                   9057048c41ea1       kube-proxy-xfnp6                           kube-system
	5dc910c8154e4       a1894772a478e                                                                                                                20 minutes ago      Running             etcd                      0                   c6f607736ce1a       etcd-addons-991638                         kube-system
	14517010441e5       b5f57ec6b9867                                                                                                                20 minutes ago      Running             kube-scheduler            0                   45e90d4f82e13       kube-scheduler-addons-991638               kube-system
	aac6857cf97a0       7eb2c6ff0c5a7                                                                                                                20 minutes ago      Running             kube-controller-manager   0                   b61da85a9eb0e       kube-controller-manager-addons-991638      kube-system
	a59993882d357       43911e833d64d                                                                                                                20 minutes ago      Running             kube-apiserver            0                   36c3274520a66       kube-apiserver-addons-991638               kube-system
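
This table is CRI container state as reported through cri-dockerd. The same view is available on the node itself, which helps when the apiserver is unreachable:

    # Inspect containers over the CRI socket from inside the node
    minikube -p addons-991638 ssh -- sudo crictl ps -a
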
	
	
	==> controller_ingress [810d41d3d1f9] <==
	I1002 20:30:30.918480       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1002 20:30:30.918697       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-g6rz7"
	I1002 20:30:30.924600       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.934073       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.957588       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:30:30.957659       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1002 20:30:30.957685       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1002 20:41:00.654802       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1002 20:41:00.656479       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1002 20:41:00.660596       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W1002 20:41:00.661083       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1002 20:41:00.666258       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:41:00.669648       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"001d4343-4f08-46c8-902f-8636f6279caa", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2969", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I1002 20:41:00.711843       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:41:00.712787       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1002 20:41:03.995039       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1002 20:41:03.995734       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:41:04.039335       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:41:04.039868       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1002 20:41:07.329886       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1002 20:41:30.926334       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	W1002 20:41:30.933118       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1002 20:41:30.933853       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"001d4343-4f08-46c8-902f-8636f6279caa", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3040", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1002 20:41:34.267110       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1002 20:41:37.599754       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
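
The 'Service "default/nginx" does not have any active Endpoint' warnings are downstream of the rate-limited pulls in the Docker log above: the nginx pod never became Ready, so its Service has no endpoints and every backend reload sees an empty upstream. The chain can be confirmed directly:

    # No Ready pods behind the Service => empty Endpoints => ingress warning
    kubectl --context addons-991638 -n default get endpoints nginx
    kubectl --context addons-991638 -n default describe pod nginx
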
	
	
	==> coredns [6691f55a7295] <==
	[INFO] 10.244.0.7:47201 - 40794 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002771285s
	[INFO] 10.244.0.7:47201 - 57423 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000191904s
	[INFO] 10.244.0.7:47201 - 29961 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000108481s
	[INFO] 10.244.0.7:35713 - 8952 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191206s
	[INFO] 10.244.0.7:35713 - 8475 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100112s
	[INFO] 10.244.0.7:33033 - 27442 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128445s
	[INFO] 10.244.0.7:33033 - 27253 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087024s
	[INFO] 10.244.0.7:45040 - 19609 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108638s
	[INFO] 10.244.0.7:45040 - 19412 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134558s
	[INFO] 10.244.0.7:37712 - 40936 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001243118s
	[INFO] 10.244.0.7:37712 - 41124 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001461721s
	[INFO] 10.244.0.7:56368 - 25712 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121651s
	[INFO] 10.244.0.7:56368 - 25933 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087615s
	[INFO] 10.244.0.26:33665 - 7524 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225356s
	[INFO] 10.244.0.26:36616 - 9923 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170948s
	[INFO] 10.244.0.26:57364 - 60911 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153093s
	[INFO] 10.244.0.26:49778 - 1221 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113478s
	[INFO] 10.244.0.26:50758 - 6790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157762s
	[INFO] 10.244.0.26:47970 - 38720 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085318s
	[INFO] 10.244.0.26:47839 - 36929 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002380387s
	[INFO] 10.244.0.26:52240 - 40464 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002084794s
	[INFO] 10.244.0.26:58902 - 63295 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001598231s
	[INFO] 10.244.0.26:38424 - 57615 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001549484s
	[INFO] 10.244.0.29:36958 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254756s
	[INFO] 10.244.0.29:59866 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178841s
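
The NXDOMAIN bursts are ordinary ndots:5 search-path expansion, not failures: a lookup of registry.kube-system.svc.cluster.local is first tried with each suffix in the pod's search list (its own namespace, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain) before the absolute name answers NOERROR. The search list that drives this is visible from any pod; the probe below reuses an image already present on the node to avoid the rate limit:

    # Show the resolver search list behind the NXDOMAIN expansion
    kubectl --context addons-991638 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c 'cat /etc/resolv.conf; nslookup registry.kube-system.svc.cluster.local'
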
	
	
	==> describe nodes <==
	Name:               addons-991638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-991638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-991638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-991638
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-991638
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:48:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:45:42 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:45:42 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:45:42 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:45:42 +0000   Thu, 02 Oct 2025 20:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-991638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 72f32394f70644d59920eb3322dfa720
	  System UUID:                86ebb095-120f-4f4a-aceb-13d70f79315b
	  Boot ID:                    da6cbe7f-2b2e-4cba-8b8d-394577434cdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gadget                      gadget-gq5qh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-g6rz7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-wkwnx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-addons-991638                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-991638                250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-991638       200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xfnp6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-991638                100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (3%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node addons-991638 event: Registered Node addons-991638 in Controller
	  Normal   NodeReady                19m                kubelet          Node addons-991638 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:27] kauditd_printk_skb: 8 callbacks suppressed
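
The kmem.limit_in_bytes deprecation notice matches the kubelet's CgroupV1 warning in the node events below: this node still runs the legacy cgroup v1 hierarchy, which the kubelet now treats as maintenance-mode. Which hierarchy a host uses is a one-liner:

    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1
    minikube -p addons-991638 ssh -- stat -fc %T /sys/fs/cgroup/
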
	
	
	==> etcd [5dc910c8154e] <==
	{"level":"warn","ts":"2025-10-02T20:28:59.959804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.946219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.972286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.836192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.866041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.877941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.897162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.933812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.977588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.014404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.063387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.106303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.178294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.193258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.208837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.237195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:38:58.669143Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1750}
	{"level":"info","ts":"2025-10-02T20:38:58.735928Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1750,"took":"66.108561ms","hash":2247637866,"current-db-size-bytes":10399744,"current-db-size":"10 MB","current-db-size-in-use-bytes":6627328,"current-db-size-in-use":"6.6 MB"}
	{"level":"info","ts":"2025-10-02T20:38:58.735983Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2247637866,"revision":1750,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T20:43:58.675999Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2681}
	{"level":"info","ts":"2025-10-02T20:43:58.699153Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2681,"took":"22.244363ms","hash":2989696203,"current-db-size-bytes":10399744,"current-db-size":"10 MB","current-db-size-in-use-bytes":4075520,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-10-02T20:43:58.699208Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2989696203,"revision":2681,"compact-revision":1750}
	{"level":"info","ts":"2025-10-02T20:48:58.682883Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3341}
	{"level":"info","ts":"2025-10-02T20:48:58.702206Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3341,"took":"18.802496ms","hash":602300225,"current-db-size-bytes":10399744,"current-db-size":"10 MB","current-db-size-in-use-bytes":2424832,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-10-02T20:48:58.702260Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":602300225,"revision":3341,"compact-revision":2681}
	
	
	==> kernel <==
	 20:49:02 up  3:31,  0 user,  load average: 0.21, 0.79, 1.63
	Linux addons-991638 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [a59993882d35] <==
	W1002 20:34:18.407803       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1002 20:34:19.216280       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1002 20:34:19.501373       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1002 20:34:36.832244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45764: use of closed network connection
	E1002 20:34:37.126713       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45804: use of closed network connection
	E1002 20:34:37.290602       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45832: use of closed network connection
	I1002 20:35:11.208106       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.127.144"}
	I1002 20:39:00.779812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:41:00.657613       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 20:41:00.978461       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.49.99"}
	I1002 20:41:32.870869       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:41:32.870915       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:41:32.907800       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:41:32.908121       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:41:32.918623       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:41:32.918668       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:41:32.937189       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:41:32.937246       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:41:32.972649       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:41:32.973318       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 20:41:33.918721       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 20:41:33.973902       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 20:41:33.985575       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1002 20:41:51.859773       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1002 20:49:00.780871       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
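
"Terminating all watchers from cacher" marks a served resource type being removed: the volcano.sh groups at 20:34 disappear when the Volcano test tears its addon down, and the snapshot.storage.k8s.io groups at 20:41 when volumesnapshots is disabled. Disabling an addon is what triggers the CRD deletion these lines record:

    # CRD removal (here via addon disable) produces the watcher-termination logs
    minikube -p addons-991638 addons disable volumesnapshots
    kubectl --context addons-991638 get crd | grep snapshot.storage.k8s.io
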
	
	
	==> kube-controller-manager [aac6857cf97a] <==
	E1002 20:48:22.903841       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:48:32.632633       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:32.633935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:36.470343       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:36.476725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:37.904987       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:48:38.811367       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:38.812755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:41.920769       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:41.922264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:42.106424       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:42.108041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:43.916123       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:43.917661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:44.160049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:44.161215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:48.267408       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:48.268474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:52.003494       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:48:52.005151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:48:52.905583       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:49:00.398608       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:49:00.400075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:49:00.925277       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:49:00.926438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
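Every persistentvolume-binder error above traces to one missing object: no StorageClass named "local-path" exists when PVC default/test-pvc is evaluated. For orientation only, below is a minimal sketch of the class such a claim expects, assuming the rancher.io/local-path provisioner that the local-path-provisioner addon normally ships; the addon's real manifest may differ:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
    provisioner: rancher.io/local-path
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete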
	
	
	==> kube-proxy [484f1ee7ca6c] <==
	I1002 20:29:10.144358       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:29:10.287533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:29:10.388187       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:29:10.388220       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:29:10.388302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:29:10.427067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:29:10.427117       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:29:10.431953       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:29:10.432214       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:29:10.432229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:29:10.433939       1 config.go:200] "Starting service config controller"
	I1002 20:29:10.433950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:29:10.433980       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:29:10.433985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:29:10.433996       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:29:10.434000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:29:10.435854       1 config.go:309] "Starting node config controller"
	I1002 20:29:10.435864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:29:10.435871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:29:10.535044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:29:10.535084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:29:10.535128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
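The kube-proxy block is clean apart from one configuration hint: nodePortAddresses is unset, so NodePort traffic is accepted on all local IPs. On a kubeadm-managed cluster like this one the setting lives in the kube-proxy ConfigMap; a sketch of the suggested change, assuming the standard kubeadm layout (ConfigMap "kube-proxy" with key config.conf):

    kubectl --context addons-991638 -n kube-system edit configmap kube-proxy
    # in config.conf, set:
    #   nodePortAddresses: ["primary"]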
	
	
	==> kube-scheduler [14517010441e] <==
	E1002 20:29:00.815378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:29:00.815413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:00.815443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:00.815517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:00.815547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:00.815654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:29:00.815692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:29:00.815742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:29:01.619085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:29:01.626118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:29:01.726859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:01.845808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:01.894559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:01.899233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:29:01.914113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:29:01.933506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:01.941316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:02.102088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:02.108982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:02.129471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:29:02.240337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 20:29:04.797841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 20:45:02.781937       1 framework.go:1298] "Plugin failed" err="binding volumes: context deadline exceeded" plugin="VolumeBinding" pod="default/test-local-path" node="addons-991638"
	E1002 20:45:02.782230       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: context deadline exceeded" logger="UnhandledError" pod="default/test-local-path"
	E1002 20:45:04.003130       1 schedule_one.go:191] "Status after running PostFilter plugins for pod" logger="UnhandledError" pod="default/test-local-path" status="not found"
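The two 20:45 errors are the scheduler-side view of the same storage failure: VolumeBinding's PreBind step waited for a PVC binding that never happened and hit its context deadline for pod default/test-local-path. Two quick checks narrow this down (plain kubectl, nothing test-specific):

    kubectl --context addons-991638 get pvc test-pvc -n default
    kubectl --context addons-991638 get storageclass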
	
	
	==> kubelet <==
	Oct 02 20:46:43 addons-991638 kubelet[2264]: E1002 20:46:43.549604    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:46:47 addons-991638 kubelet[2264]: E1002 20:46:47.550839    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:46:58 addons-991638 kubelet[2264]: E1002 20:46:58.545420    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:46:59 addons-991638 kubelet[2264]: E1002 20:46:59.554174    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:47:11 addons-991638 kubelet[2264]: E1002 20:47:11.547272    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:47:11 addons-991638 kubelet[2264]: E1002 20:47:11.547764    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:47:14 addons-991638 kubelet[2264]: W1002 20:47:14.681387    2264 logging.go:55] [core] [Channel #74 SubChannel #75]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 02 20:47:22 addons-991638 kubelet[2264]: E1002 20:47:22.547059    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:47:26 addons-991638 kubelet[2264]: E1002 20:47:26.545337    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:47:37 addons-991638 kubelet[2264]: E1002 20:47:37.547164    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:47:40 addons-991638 kubelet[2264]: E1002 20:47:40.544565    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:47:48 addons-991638 kubelet[2264]: E1002 20:47:48.546651    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:47:51 addons-991638 kubelet[2264]: E1002 20:47:51.545345    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:47:55 addons-991638 kubelet[2264]: I1002 20:47:55.553358    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:48:01 addons-991638 kubelet[2264]: E1002 20:48:01.548491    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:48:04 addons-991638 kubelet[2264]: E1002 20:48:04.544932    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:48:15 addons-991638 kubelet[2264]: E1002 20:48:15.544900    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:48:16 addons-991638 kubelet[2264]: E1002 20:48:16.546743    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:48:26 addons-991638 kubelet[2264]: W1002 20:48:26.609506    2264 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 02 20:48:29 addons-991638 kubelet[2264]: E1002 20:48:29.544889    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:48:31 addons-991638 kubelet[2264]: E1002 20:48:31.548885    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:48:43 addons-991638 kubelet[2264]: E1002 20:48:43.546210    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:48:43 addons-991638 kubelet[2264]: E1002 20:48:43.548603    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:48:55 addons-991638 kubelet[2264]: E1002 20:48:55.547230    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:48:56 addons-991638 kubelet[2264]: E1002 20:48:56.544863    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
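Every kubelet error in this window shares one root cause: unauthenticated pulls from docker.io hitting the Docker Hub rate limit. A common mitigation is to pull with credentials; the sketch below uses placeholders (<user> and <token> are illustrative, not from this run) and attaches the secret to the default service account so unmodified pod specs pick it up:

    kubectl --context addons-991638 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>
    kubectl --context addons-991638 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'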
	
	
	==> storage-provisioner [7b7e993c0e79] <==
	W1002 20:48:37.856078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:39.858724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:39.863327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:41.866086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:41.870621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:43.874372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:43.879347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:45.882119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:45.886718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:47.889732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:47.894297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:49.897175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:49.903852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:51.906506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:51.911123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:53.914351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:53.921277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:55.924217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:55.928944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:57.931757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:57.936076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:59.938971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:48:59.943565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:49:01.947529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:49:01.954783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
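The storage-provisioner warnings are deprecation notices rather than failures: something in the provisioner, most likely its leader-election lock, still reads and writes v1 Endpoints, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlices. Both views can be listed to confirm the objects themselves are healthy:

    kubectl --context addons-991638 get endpoints -n kube-system
    kubectl --context addons-991638 get endpointslices.discovery.k8s.io -n kube-system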
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
helpers_test.go:269: (dbg) Run:  kubectl --context addons-991638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-991638 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-991638 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27: exit status 1 (129.017388ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-991638/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:41:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:  10.244.0.35
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zlw9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7zlw9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-991638
	  Warning  Failed     8m2s                   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m19s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m19s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m19s (x4 over 7m48s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m57s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m57s (x21 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-991638/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:35:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxbjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-sxbjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                   From                     Message
	  ----     ------              ----                  ----                     -------
	  Normal   Scheduled           13m                   default-scheduler        Successfully assigned default/task-pv-pod to addons-991638
	  Warning  Failed              11m (x4 over 13m)     kubelet                  Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling             10m (x5 over 13m)     kubelet                  Pulling image "docker.io/nginx"
	  Warning  Failed              10m (x5 over 13m)     kubelet                  Error: ErrImagePull
	  Warning  Failed              10m                   kubelet                  Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff             3m23s (x42 over 13m)  kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              3m23s (x42 over 13m)  kubelet                  Error: ImagePullBackOff
	  Warning  FailedAttachVolume  81s (x3 over 5m23s)   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-fbed7c32-5fca-4400-98d0-afd7219f7e28" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 5526360b-9fcf-11f0-83a3-b61312c4d597
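Besides the pull failures, task-pv-pod carries a second, independent problem: the attachdetach-controller timed out waiting for the external-attacher of hostpath.csi.k8s.io to attach volume pvc-fbed7c32-5fca-4400-98d0-afd7219f7e28. Attach state is recorded in cluster-scoped VolumeAttachment objects and can be inspected directly (generic commands; <name> is a placeholder):

    kubectl --context addons-991638 get volumeattachments
    kubectl --context addons-991638 describe volumeattachment <name>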
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6vpp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-p6vpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  4m1s   default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: context deadline exceeded
	  Warning  FailedScheduling  3m59s  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. not found
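Read in order, the two FailedScheduling events tell the whole story for test-local-path: the first attempt spent its deadline inside VolumeBinding's PreBind step, and the retry then failed outright because the claim was still unbound and its StorageClass was "not found". The same trail can be pulled without a full describe; field selectors on involvedObject.name and reason are standard kubectl:

    kubectl --context addons-991638 get events -n default \
      --field-selector involvedObject.name=test-local-path,reason=FailedScheduling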

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h2p7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z8w27" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-991638 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable ingress-dns --alsologtostderr -v=1: (1.071007311s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable ingress --alsologtostderr -v=1: (7.713405441s)
--- FAIL: TestAddons/parallel/Ingress (492.35s)
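Both pods this test waited on (nginx and task-pv-pod) failed on unauthenticated docker.io pulls, so the failure pattern points at the CI environment's Docker Hub quota rather than at the ingress addon itself. Besides per-cluster pull secrets (sketched earlier), a registry mirror sidesteps the limit at the daemon level; minikube exposes this at start time, and the mirror URL below is only an example:

    out/minikube-linux-arm64 start -p addons-991638 --registry-mirror=https://mirror.gcr.io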

                                                
                                    
TestAddons/parallel/CSI (372.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1002 20:35:28.059681  703895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 20:35:28.063652  703895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 20:35:28.063679  703895 kapi.go:107] duration metric: took 6.554209ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.566935ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-991638 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991638 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-991638 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28] Pending
helpers_test.go:352: "task-pv-pod" [2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-02 20:41:29.880404009 +0000 UTC m=+827.048581619
addons_test.go:567: (dbg) Run:  kubectl --context addons-991638 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-991638 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-991638/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:35:29 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
  IP:  10.244.0.32
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxbjm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-sxbjm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-991638
  Warning  Failed     4m25s (x4 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m1s (x5 over 5m59s)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m1s (x5 over 5m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     3m1s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     54s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    43s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-991638 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-991638 logs task-pv-pod -n default: exit status 1 (128.786843ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-991638 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-991638
helpers_test.go:243: (dbg) docker inspect addons-991638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	        "Created": "2025-10-02T20:28:36.164446632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 705058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:28:36.229753591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hosts",
	        "LogPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84-json.log",
	        "Name": "/addons-991638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-991638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-991638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	                "LowerDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953-init/diff:/var/lib/docker/overlay2/3c380b0850506122817bc2917299dd60fe15a32ab35b7debe4519f1f9045f4d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-991638",
	                "Source": "/var/lib/docker/volumes/addons-991638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-991638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-991638",
	                "name.minikube.sigs.k8s.io": "addons-991638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "768c8a7310c370a43da0c26c5d036d5e7219705fa051b89897a391452ea6d9a6",
	            "SandboxKey": "/var/run/docker/netns/768c8a7310c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-991638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a0:60:40:27:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05f483610a0fe679b5a4ae4efa1318f553b88c9d264d6b136b55ee1eb76c3654",
	                    "EndpointID": "cbb01d4023b7a4128894d4e3144f6ccc9b60257273c0bfbde032cb7624cd4fb7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-991638",
	                        "ac51530cb591"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
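The inspect output above shows minikube's port-publishing scheme for the kic container: each service port (22, 2376, 5000, 8443, 32443) is bound to an ephemeral port on 127.0.0.1 (33530-33534). A minimal sketch of recovering and using one of those mappings, reusing the same --format query and the ssh parameters that appear later in this log:

# query the host port Docker bound to the container's SSH port
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-991638

# connect the way the harness does (port, key path, and user taken from the sshutil line below)
ssh -p 33530 -i /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa docker@127.0.0.1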
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-991638 -n addons-991638
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 logs -n 25: (1.203424016s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ start   │ -o=json --download-only -p download-only-545661 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p download-docker-039409 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p download-docker-039409                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p binary-mirror-067581 --alsologtostderr --binary-mirror http://127.0.0.1:39571 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p binary-mirror-067581                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ addons  │ disable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ start   │ -p addons-991638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:30 UTC │
	│ addons  │ addons-991638 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ addons  │ addons-991638 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ addons  │ addons-991638 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ ip      │ addons-991638 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ enable headlamp -p addons-991638 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ addons  │ addons-991638 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ addons  │ addons-991638 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
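For readability, the long start invocation recorded in the audit table breaks out as follows (flags verbatim from the table row; the binary name is assumed from the rest of this log):

out/minikube-linux-arm64 start -p addons-991638 --wait=true --memory=4096 --alsologtostderr \
  --driver=docker --container-runtime=docker \
  --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots \
  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget \
  --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher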
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:28:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:28:10.231562  704660 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:10.231700  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.231711  704660 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:10.231716  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.232008  704660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:28:10.232510  704660 out.go:368] Setting JSON to false
	I1002 20:28:10.233399  704660 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11417,"bootTime":1759425473,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:28:10.233494  704660 start.go:140] virtualization:  
	I1002 20:28:10.236719  704660 out.go:179] * [addons-991638] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:28:10.240328  704660 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:28:10.240425  704660 notify.go:220] Checking for updates...
	I1002 20:28:10.246179  704660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:28:10.249006  704660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:28:10.251947  704660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:28:10.255157  704660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:28:10.257883  704660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:28:10.260862  704660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:28:10.288692  704660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:28:10.288859  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.345302  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.335898449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.345417  704660 docker.go:318] overlay module found
	I1002 20:28:10.348598  704660 out.go:179] * Using the docker driver based on user configuration
	I1002 20:28:10.351429  704660 start.go:304] selected driver: docker
	I1002 20:28:10.351448  704660 start.go:924] validating driver "docker" against <nil>
	I1002 20:28:10.351462  704660 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:28:10.352198  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.405054  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.396474632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.405212  704660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:28:10.405467  704660 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:28:10.408345  704660 out.go:179] * Using Docker driver with root privileges
	I1002 20:28:10.411100  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:10.411184  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:10.411197  704660 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:28:10.411276  704660 start.go:348] cluster config:
	{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:10.414279  704660 out.go:179] * Starting "addons-991638" primary control-plane node in "addons-991638" cluster
	I1002 20:28:10.417120  704660 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:28:10.419910  704660 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:28:10.422725  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:10.422776  704660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 20:28:10.422791  704660 cache.go:58] Caching tarball of preloaded images
	I1002 20:28:10.422838  704660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:28:10.422873  704660 preload.go:233] Found /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 20:28:10.422902  704660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 20:28:10.423255  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:10.423397  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json: {Name:mk2f26d255d9ea8bd15922b678de4d5774eef391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:10.438348  704660 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:28:10.438495  704660 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:28:10.438518  704660 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:28:10.438524  704660 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:28:10.438532  704660 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:28:10.438537  704660 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:28:28.104678  704660 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:28:28.104717  704660 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:28:28.104748  704660 start.go:360] acquireMachinesLock for addons-991638: {Name:mk53aeb56b1e9fb052ee11df133ba143769d5932 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:28:28.104882  704660 start.go:364] duration metric: took 113.831µs to acquireMachinesLock for "addons-991638"
	I1002 20:28:28.104912  704660 start.go:93] Provisioning new machine with config: &{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:28:28.104985  704660 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:28:28.108517  704660 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:28:28.108807  704660 start.go:159] libmachine.API.Create for "addons-991638" (driver="docker")
	I1002 20:28:28.108861  704660 client.go:168] LocalClient.Create starting
	I1002 20:28:28.108989  704660 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem
	I1002 20:28:28.920995  704660 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem
	I1002 20:28:29.719304  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:28:29.735220  704660 cli_runner.go:211] docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:28:29.735320  704660 network_create.go:284] running [docker network inspect addons-991638] to gather additional debugging logs...
	I1002 20:28:29.735342  704660 cli_runner.go:164] Run: docker network inspect addons-991638
	W1002 20:28:29.756033  704660 cli_runner.go:211] docker network inspect addons-991638 returned with exit code 1
	I1002 20:28:29.756065  704660 network_create.go:287] error running [docker network inspect addons-991638]: docker network inspect addons-991638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-991638 not found
	I1002 20:28:29.756079  704660 network_create.go:289] output of [docker network inspect addons-991638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-991638 not found
	
	** /stderr **
	I1002 20:28:29.756173  704660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:29.772458  704660 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d5e320}
	I1002 20:28:29.772498  704660 network_create.go:124] attempt to create docker network addons-991638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:28:29.772554  704660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-991638 addons-991638
	I1002 20:28:29.829752  704660 network_create.go:108] docker network addons-991638 192.168.49.0/24 created
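At this point the dedicated bridge network exists. A quick check of the subnet and gateway it was created with, reusing the inspect template shown a few lines above (expected output: 192.168.49.0/24 192.168.49.1):

docker network inspect addons-991638 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'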
	I1002 20:28:29.829781  704660 kic.go:121] calculated static IP "192.168.49.2" for the "addons-991638" container
	I1002 20:28:29.829879  704660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:28:29.847391  704660 cli_runner.go:164] Run: docker volume create addons-991638 --label name.minikube.sigs.k8s.io=addons-991638 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:28:29.864875  704660 oci.go:103] Successfully created a docker volume addons-991638
	I1002 20:28:29.864995  704660 cli_runner.go:164] Run: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:28:32.119965  704660 cli_runner.go:217] Completed: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.254927204s)
	I1002 20:28:32.120005  704660 oci.go:107] Successfully prepared a docker volume addons-991638
	I1002 20:28:32.120024  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:32.120045  704660 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:28:32.120115  704660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:28:36.088209  704660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.968050647s)
	I1002 20:28:36.088240  704660 kic.go:203] duration metric: took 3.968193754s to extract preloaded images to volume ...
	W1002 20:28:36.088386  704660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:28:36.088487  704660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:28:36.149550  704660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-991638 --name addons-991638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-991638 --network addons-991638 --ip 192.168.49.2 --volume addons-991638:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:28:36.432531  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Running}}
	I1002 20:28:36.459147  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:36.484423  704660 cli_runner.go:164] Run: docker exec addons-991638 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:28:36.539034  704660 oci.go:144] the created container "addons-991638" has a running status.
	I1002 20:28:36.539068  704660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa...
	I1002 20:28:37.262683  704660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:28:37.288911  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.309985  704660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:28:37.310010  704660 kic_runner.go:114] Args: [docker exec --privileged addons-991638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:28:37.369831  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.391035  704660 machine.go:93] provisionDockerMachine start ...
	I1002 20:28:37.391126  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.411223  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.411540  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.411549  704660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:28:37.553086  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.553108  704660 ubuntu.go:182] provisioning hostname "addons-991638"
	I1002 20:28:37.553169  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.575369  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.575674  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.575686  704660 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-991638 && echo "addons-991638" | sudo tee /etc/hostname
	I1002 20:28:37.721568  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.721652  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.747484  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.747789  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.747811  704660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-991638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-991638/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-991638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:28:37.877526  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:28:37.877550  704660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-702037/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-702037/.minikube}
	I1002 20:28:37.877573  704660 ubuntu.go:190] setting up certificates
	I1002 20:28:37.877582  704660 provision.go:84] configureAuth start
	I1002 20:28:37.877644  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:37.894231  704660 provision.go:143] copyHostCerts
	I1002 20:28:37.894324  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem (1078 bytes)
	I1002 20:28:37.894448  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem (1123 bytes)
	I1002 20:28:37.894507  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem (1675 bytes)
	I1002 20:28:37.894559  704660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem org=jenkins.addons-991638 san=[127.0.0.1 192.168.49.2 addons-991638 localhost minikube]
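The server certificate is generated with the SAN list shown above. One way to confirm the SANs on disk (path from the log line; the -ext flag needs OpenSSL 1.1.1 or newer):

openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem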
	I1002 20:28:38.951532  704660 provision.go:177] copyRemoteCerts
	I1002 20:28:38.951598  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:28:38.951639  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:38.968871  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:39.069322  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:28:39.087473  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:28:39.106442  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:28:39.125193  704660 provision.go:87] duration metric: took 1.247587619s to configureAuth
	I1002 20:28:39.125222  704660 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:28:39.125407  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:28:39.125491  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.145970  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.146282  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.146299  704660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 20:28:39.282106  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 20:28:39.282131  704660 ubuntu.go:71] root file system type: overlay
	I1002 20:28:39.282235  704660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 20:28:39.282310  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.300258  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.300556  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.300651  704660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 20:28:39.442933  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 20:28:39.443023  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.460361  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.460680  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.460703  704660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 20:28:40.382609  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-02 20:28:39.437593143 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
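
The SSH command above follows a compare-then-swap pattern: the service is replaced and restarted only when the rendered unit differs from what is on disk, which keeps repeated provisioning runs idempotent. A minimal sketch of the same pattern, using the paths from the log:

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi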
	
	I1002 20:28:40.382680  704660 machine.go:96] duration metric: took 2.991625077s to provisionDockerMachine
	I1002 20:28:40.382776  704660 client.go:171] duration metric: took 12.273900895s to LocalClient.Create
	I1002 20:28:40.382819  704660 start.go:167] duration metric: took 12.27401677s to libmachine.API.Create "addons-991638"
	I1002 20:28:40.382841  704660 start.go:293] postStartSetup for "addons-991638" (driver="docker")
	I1002 20:28:40.382863  704660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:28:40.382961  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:28:40.383028  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.400184  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.497649  704660 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:28:40.501057  704660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:28:40.501087  704660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:28:40.501099  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/addons for local assets ...
	I1002 20:28:40.501170  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/files for local assets ...
	I1002 20:28:40.501198  704660 start.go:296] duration metric: took 118.339458ms for postStartSetup
	I1002 20:28:40.501542  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.519025  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:40.519322  704660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:28:40.519374  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.535401  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.626314  704660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:28:40.631258  704660 start.go:128] duration metric: took 12.526256292s to createHost
	I1002 20:28:40.631280  704660 start.go:83] releasing machines lock for "addons-991638", held for 12.526385541s
	I1002 20:28:40.631365  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.648027  704660 ssh_runner.go:195] Run: cat /version.json
	I1002 20:28:40.648051  704660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:28:40.648079  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.648112  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.671874  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.672768  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.765471  704660 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:40.858838  704660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:28:40.863487  704660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:28:40.863561  704660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:28:40.891689  704660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:28:40.891716  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:40.891748  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:40.891847  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:40.905197  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 20:28:40.914585  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 20:28:40.923483  704660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 20:28:40.923613  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 20:28:40.932751  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.941795  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 20:28:40.950514  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.959583  704660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:28:40.967941  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 20:28:40.976883  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 20:28:40.986149  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 20:28:40.995305  704660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:28:41.004003  704660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:28:41.012739  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.128237  704660 ssh_runner.go:195] Run: sudo systemctl restart containerd
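
The sed commands above rewrite /etc/containerd/config.toml in place: they pin the sandbox image, force SystemdCgroup = false to match the "cgroupfs" driver detected on the host, migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A sketch for spot-checking the result (the exact TOML layout varies by containerd version, so treat the expected output as an assumption):

	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# expected, for a containerd 1.x CRI config:
	#   SystemdCgroup = false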
	I1002 20:28:41.231332  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:41.231381  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:41.231441  704660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 20:28:41.246943  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.259982  704660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:28:41.299529  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.312040  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 20:28:41.325475  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:41.339679  704660 ssh_runner.go:195] Run: which cri-dockerd
	I1002 20:28:41.343375  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 20:28:41.351275  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 20:28:41.364332  704660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 20:28:41.484463  704660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 20:28:41.601245  704660 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 20:28:41.601360  704660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 20:28:41.614352  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 20:28:41.626868  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.733314  704660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 20:28:42.111293  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:28:42.128509  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 20:28:42.145965  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.163934  704660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 20:28:42.308063  704660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 20:28:42.433113  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.552919  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 20:28:42.569022  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 20:28:42.582319  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.699949  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 20:28:42.769589  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.783022  704660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 20:28:42.783145  704660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 20:28:42.787107  704660 start.go:563] Will wait 60s for crictl version
	I1002 20:28:42.787194  704660 ssh_runner.go:195] Run: which crictl
	I1002 20:28:42.790829  704660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:28:42.815945  704660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 20:28:42.816103  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.842953  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.874688  704660 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 20:28:42.874787  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:42.890887  704660 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:28:42.895320  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
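
The one-liner above is a filter-then-append rewrite of /etc/hosts: strip any stale line for the name, re-add the fresh mapping, and copy the result back. A generalized sketch, using the host entry from the log:

	ENTRY=$'192.168.49.1\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp rather than mv preserves the file's owner, mode, and inode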
	I1002 20:28:42.906278  704660 kubeadm.go:883] updating cluster {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:28:42.906402  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:42.906467  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.925708  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.925733  704660 docker.go:621] Images already preloaded, skipping extraction
	I1002 20:28:42.925801  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.945361  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.945383  704660 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:28:42.945393  704660 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1002 20:28:42.945504  704660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-991638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:28:42.945582  704660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 20:28:42.996799  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:42.996828  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:42.996844  704660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:28:42.996865  704660 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-991638 NodeName:addons-991638 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:28:42.996983  704660 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-991638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
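
A generated config like the one above can be checked offline before it is handed to kubeadm init. This is a sketch, not something the test runs; `kubeadm config validate` assumes a reasonably recent kubeadm, and --dry-run exercises the init path without mutating the node:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run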
	
	I1002 20:28:42.997055  704660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:28:43.006552  704660 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:28:43.006645  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:28:43.015646  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1002 20:28:43.030545  704660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:28:43.044123  704660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 20:28:43.057931  704660 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:28:43.061696  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:28:43.072014  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:43.187259  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:28:43.203829  704660 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638 for IP: 192.168.49.2
	I1002 20:28:43.203899  704660 certs.go:195] generating shared ca certs ...
	I1002 20:28:43.203929  704660 certs.go:227] acquiring lock for ca certs: {Name:mk80feb87d46a3c61de00b383dd8ac7fd2e19095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:43.204734  704660 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key
	I1002 20:28:44.637131  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt ...
	I1002 20:28:44.637163  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt: {Name:mkb6d8319d3a74d42b081683e314c37e53586717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637366  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key ...
	I1002 20:28:44.637379  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key: {Name:mkbd44d267c3b1cf1fed0a906ac7bf46743d8695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637481  704660 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key
	I1002 20:28:45.683223  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt ...
	I1002 20:28:45.683262  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt: {Name:mkf2892474e0dfa857be215b552060af628196ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683490  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key ...
	I1002 20:28:45.683507  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key: {Name:mkb3e427bf0a6e7ceb613b926e3c90e07409da52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683588  704660 certs.go:257] generating profile certs ...
	I1002 20:28:45.683654  704660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key
	I1002 20:28:45.683671  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt with IP's: []
	I1002 20:28:46.046463  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt ...
	I1002 20:28:46.046497  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: {Name:mk51f9d8abe3f7006e638458dae2df70cdaa936a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046676  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key ...
	I1002 20:28:46.046691  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key: {Name:mke5acc604e8c4ff883546df37d116f9c766e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046773  704660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b
	I1002 20:28:46.046795  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:28:46.569113  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b ...
	I1002 20:28:46.569145  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b: {Name:mk40a7d58b6523a132d065d0266597e722b3762d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.569955  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b ...
	I1002 20:28:46.569974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b: {Name:mkbe601cfd4f3105ca705f6de8b8f9d490a11ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.570609  704660 certs.go:382] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt
	I1002 20:28:46.570694  704660 certs.go:386] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key
	I1002 20:28:46.570747  704660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key
	I1002 20:28:46.570767  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt with IP's: []
	I1002 20:28:46.754716  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt ...
	I1002 20:28:46.754747  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt: {Name:mkd0f46ec8109fe64dda020f7c270bd3d9dd6bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.754958  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key ...
	I1002 20:28:46.754974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key: {Name:mk7b62b96428d619ab88e3c0c6972f37ee378b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.755195  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:28:46.755238  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:28:46.755269  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:28:46.755294  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem (1675 bytes)
	I1002 20:28:46.755827  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:28:46.773406  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:28:46.790954  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:28:46.807835  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:28:46.825141  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:28:46.842372  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:28:46.860238  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:28:46.877776  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:28:46.894424  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:28:46.911754  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:28:46.925117  704660 ssh_runner.go:195] Run: openssl version
	I1002 20:28:46.931161  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:28:46.940887  704660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945128  704660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945198  704660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.986089  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
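
The two steps above install the minikube CA into the system trust store using OpenSSL's subject-hash naming convention: certificates in /etc/ssl/certs are looked up by a hash of their subject, so the symlink must be named <hash>.0. A sketch of deriving that name by hand (the b5213941 value matches the hash computed in the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"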
	I1002 20:28:46.995228  704660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:28:46.998614  704660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:28:46.998670  704660 kubeadm.go:400] StartCluster: {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:46.998801  704660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 20:28:47.017260  704660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:28:47.024934  704660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:28:47.032572  704660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:28:47.032637  704660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:28:47.040541  704660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:28:47.040563  704660 kubeadm.go:157] found existing configuration files:
	
	I1002 20:28:47.040632  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:28:47.048232  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:28:47.048324  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:28:47.055897  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:28:47.063851  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:28:47.063972  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:28:47.071920  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.079791  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:28:47.079884  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.087482  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:28:47.095260  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:28:47.095325  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:28:47.102743  704660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:28:47.143961  704660 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:28:47.144023  704660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:28:47.171162  704660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:28:47.171292  704660 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:28:47.171362  704660 kubeadm.go:318] OS: Linux
	I1002 20:28:47.171451  704660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:28:47.171534  704660 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:28:47.171621  704660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:28:47.171707  704660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:28:47.171790  704660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:28:47.171876  704660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:28:47.171956  704660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:28:47.172038  704660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:28:47.172128  704660 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:28:47.235837  704660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:28:47.235957  704660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:28:47.236052  704660 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:28:47.257841  704660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:28:47.262676  704660 out.go:252]   - Generating certificates and keys ...
	I1002 20:28:47.262771  704660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:28:47.262845  704660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:28:47.756271  704660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:28:48.584093  704660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:28:48.888267  704660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:28:49.699713  704660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:28:50.057163  704660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:28:50.057649  704660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.779363  704660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:28:50.779734  704660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.900170  704660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:28:51.497655  704660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:28:51.954519  704660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:28:51.954818  704660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:28:53.080191  704660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:28:53.266970  704660 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:28:53.973649  704660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:28:54.725487  704660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:28:55.109834  704660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:28:55.110186  704660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:28:55.113467  704660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:28:55.117318  704660 out.go:252]   - Booting up control plane ...
	I1002 20:28:55.117435  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:28:55.117518  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:28:55.118060  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:28:55.141929  704660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:28:55.142323  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:28:55.150629  704660 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:28:55.150957  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:28:55.151008  704660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:28:55.286296  704660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:28:55.286428  704660 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:28:56.789783  704660 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501225822s
	I1002 20:28:56.789937  704660 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:28:56.790047  704660 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:28:56.790165  704660 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:28:56.790264  704660 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:28:58.802179  704660 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.012574504s
	I1002 20:29:00.806811  704660 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.017417752s
	I1002 20:29:02.791474  704660 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002021418s
	I1002 20:29:02.814104  704660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:29:02.827699  704660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:29:02.846247  704660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:29:02.846862  704660 kubeadm.go:318] [mark-control-plane] Marking the node addons-991638 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:29:02.861722  704660 kubeadm.go:318] [bootstrap-token] Using token: z0jdd4.ysfi1vhms678tv6t
	I1002 20:29:02.864796  704660 out.go:252]   - Configuring RBAC rules ...
	I1002 20:29:02.864929  704660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:29:02.869885  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:29:02.888805  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:29:02.892893  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:29:02.897307  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:29:02.902794  704660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:29:03.198711  704660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:29:03.626604  704660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:29:04.197660  704660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:29:04.199081  704660 kubeadm.go:318] 
	I1002 20:29:04.199168  704660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:29:04.199174  704660 kubeadm.go:318] 
	I1002 20:29:04.199283  704660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:29:04.199304  704660 kubeadm.go:318] 
	I1002 20:29:04.199332  704660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:29:04.199403  704660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:29:04.199462  704660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:29:04.199470  704660 kubeadm.go:318] 
	I1002 20:29:04.199544  704660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:29:04.199559  704660 kubeadm.go:318] 
	I1002 20:29:04.199633  704660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:29:04.199648  704660 kubeadm.go:318] 
	I1002 20:29:04.199708  704660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:29:04.199805  704660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:29:04.199891  704660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:29:04.199904  704660 kubeadm.go:318] 
	I1002 20:29:04.199999  704660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:29:04.200089  704660 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:29:04.200099  704660 kubeadm.go:318] 
	I1002 20:29:04.200207  704660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200351  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 \
	I1002 20:29:04.200382  704660 kubeadm.go:318] 	--control-plane 
	I1002 20:29:04.200390  704660 kubeadm.go:318] 
	I1002 20:29:04.200503  704660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:29:04.200516  704660 kubeadm.go:318] 
	I1002 20:29:04.200612  704660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200736  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 
	I1002 20:29:04.203776  704660 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:29:04.204016  704660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:29:04.204131  704660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:29:04.204150  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:29:04.204164  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:29:04.207498  704660 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:29:04.210410  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:29:04.217868  704660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 20:29:04.235604  704660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:29:04.235701  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.235739  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-991638 minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-991638 minikube.k8s.io/primary=true
	I1002 20:29:04.254399  704660 ops.go:34] apiserver oom_adj: -16
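
The oom_adj value of -16 read above comes from the legacy kernel interface; modern kernels expose the same setting as oom_score_adj, and writing one updates the other. A sketch for reading it through the current interface (pgrep -o picks the oldest match, assuming a single apiserver process):

	cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj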
	I1002 20:29:04.369134  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.869740  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.370081  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.870196  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.369731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.870115  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.369228  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.869851  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.369279  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.869731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.972720  704660 kubeadm.go:1113] duration metric: took 4.737085496s to wait for elevateKubeSystemPrivileges
	I1002 20:29:08.972751  704660 kubeadm.go:402] duration metric: took 21.974085235s to StartCluster
	I1002 20:29:08.972769  704660 settings.go:142] acquiring lock: {Name:mk05279472feb5063a5c2285eba6fd6d59490060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.972884  704660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:29:08.973255  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/kubeconfig: {Name:mk451cd073bc3a44904ff8d0351225145e56e5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.973483  704660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:29:08.973596  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:29:08.973840  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.973881  704660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 20:29:08.973962  704660 addons.go:69] Setting yakd=true in profile "addons-991638"
	I1002 20:29:08.973977  704660 addons.go:238] Setting addon yakd=true in "addons-991638"
	I1002 20:29:08.973998  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.974491  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.974944  704660 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-991638"
	I1002 20:29:08.974969  704660 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-991638"
	I1002 20:29:08.974993  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.975410  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.975798  704660 addons.go:69] Setting cloud-spanner=true in profile "addons-991638"
	I1002 20:29:08.975820  704660 addons.go:238] Setting addon cloud-spanner=true in "addons-991638"
	I1002 20:29:08.975844  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.976228  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.978568  704660 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-991638"
	I1002 20:29:08.978639  704660 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:08.978669  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.979258  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.980070  704660 out.go:179] * Verifying Kubernetes components...
	I1002 20:29:08.980299  704660 addons.go:69] Setting registry-creds=true in profile "addons-991638"
	I1002 20:29:08.980320  704660 addons.go:238] Setting addon registry-creds=true in "addons-991638"
	I1002 20:29:08.980348  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.980878  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984024  704660 addons.go:69] Setting storage-provisioner=true in profile "addons-991638"
	I1002 20:29:08.984111  704660 addons.go:238] Setting addon storage-provisioner=true in "addons-991638"
	I1002 20:29:08.985311  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984905  704660 addons.go:69] Setting default-storageclass=true in profile "addons-991638"
	I1002 20:29:08.986095  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-991638"
	I1002 20:29:08.986385  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.997940  704660 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-991638"
	I1002 20:29:08.997997  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-991638"
	I1002 20:29:08.998330  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984914  704660 addons.go:69] Setting gcp-auth=true in profile "addons-991638"
	I1002 20:29:08.998967  704660 mustload.go:65] Loading cluster: addons-991638
	I1002 20:29:08.999148  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.999394  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984921  704660 addons.go:69] Setting ingress=true in profile "addons-991638"
	I1002 20:29:09.012451  704660 addons.go:238] Setting addon ingress=true in "addons-991638"
	I1002 20:29:09.012506  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.012981  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.017454  704660 addons.go:69] Setting volcano=true in profile "addons-991638"
	I1002 20:29:09.017490  704660 addons.go:238] Setting addon volcano=true in "addons-991638"
	I1002 20:29:09.017527  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.018061  704660 addons.go:69] Setting volumesnapshots=true in profile "addons-991638"
	I1002 20:29:09.018133  704660 addons.go:238] Setting addon volumesnapshots=true in "addons-991638"
	I1002 20:29:09.018173  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984925  704660 addons.go:69] Setting ingress-dns=true in profile "addons-991638"
	I1002 20:29:09.025533  704660 addons.go:238] Setting addon ingress-dns=true in "addons-991638"
	I1002 20:29:09.025587  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.026063  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.044490  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984928  704660 addons.go:69] Setting inspektor-gadget=true in profile "addons-991638"
	I1002 20:29:09.049039  704660 addons.go:238] Setting addon inspektor-gadget=true in "addons-991638"
	I1002 20:29:09.049079  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.049563  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984931  704660 addons.go:69] Setting metrics-server=true in profile "addons-991638"
	I1002 20:29:09.074105  704660 addons.go:238] Setting addon metrics-server=true in "addons-991638"
	I1002 20:29:09.074149  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.075253  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984945  704660 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-991638"
	I1002 20:29:09.101041  704660 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-991638"
	I1002 20:29:09.101085  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.101634  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.134221  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984949  704660 addons.go:69] Setting registry=true in profile "addons-991638"
	I1002 20:29:09.134685  704660 addons.go:238] Setting addon registry=true in "addons-991638"
	I1002 20:29:09.134721  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.135150  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.166068  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.985251  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:29:09.210573  704660 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:29:09.222512  704660 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:29:09.228645  704660 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:09.228697  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:29:09.228802  704660 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:09.228834  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:29:09.228917  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
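
Editor's note: the --format string in these inspect calls is a Go text/template. It indexes NetworkSettings.Ports by the container port "22/tcp", takes the first binding, and reads its HostPort (33530, as the ssh clients further down show). A self-contained sketch of the same lookup against trimmed sample data; the struct here is an illustration, not Docker's full inspect schema:

    package main

    import (
        "encoding/json"
        "os"
        "text/template"
    )

    // Trimmed-down stand-in for the fields the template touches; Docker's
    // real inspect output carries far more.
    type container struct {
        NetworkSettings struct {
            Ports map[string][]struct{ HostIP, HostPort string }
        }
    }

    func main() {
        sample := []byte(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIP":"127.0.0.1","HostPort":"33530"}]}}}`)
        var c container
        if err := json.Unmarshal(sample, &c); err != nil {
            panic(err)
        }
        // The same expression docker inspect -f evaluates in the log above.
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 33530
            panic(err)
        }
    }
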
	I1002 20:29:09.232353  704660 addons.go:238] Setting addon default-storageclass=true in "addons-991638"
	I1002 20:29:09.232403  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.232836  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.240129  704660 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:29:09.228818  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252033  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:29:09.281457  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.289194  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.276652  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:29:09.291469  704660 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:29:09.291547  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252086  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317140  704660 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-991638"
	I1002 20:29:09.317269  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317905  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.321130  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:29:09.324328  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:29:09.329618  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:29:09.329846  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:09.329862  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:29:09.329924  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.330072  704660 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:29:09.332483  704660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:09.332506  704660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:29:09.332556  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.352512  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.359187  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:29:09.364275  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:29:09.364559  704660 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:09.364575  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:29:09.364638  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.375690  704660 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 20:29:09.375940  704660 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:29:09.386355  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:09.386396  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:29:09.386476  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.402265  704660 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 20:29:09.412773  704660 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 20:29:09.418587  704660 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:09.418666  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 20:29:09.418775  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.419320  704660 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:29:09.423729  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:29:09.423757  704660 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:29:09.423846  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.441567  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:29:09.442010  704660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:29:09.447860  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:29:09.451279  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:29:09.453459  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:09.453480  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:29:09.453561  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.455757  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:29:09.455822  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:29:09.455914  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.465113  704660 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:29:09.469477  704660 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:29:09.469509  704660 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:29:09.469576  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.479455  704660 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 20:29:09.482830  704660 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:29:09.487219  704660 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:29:09.487285  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:29:09.487386  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.498491  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.506413  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:29:09.509491  704660 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:29:09.509670  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:29:09.509687  704660 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:29:09.509759  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.515326  704660 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:09.515349  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:29:09.515413  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.556794  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.592629  704660 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:29:09.595721  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.601773  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.604845  704660 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:29:09.607957  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:09.607982  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:29:09.608078  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.639621  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.660885  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.690935  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.696294  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.717153  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.743500  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.746463  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.751738  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.757583  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.764350  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.764394  704660 retry.go:31] will retry after 315.573784ms: ssh: handshake failed: EOF
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.784428  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.784456  704660 retry.go:31] will retry after 304.179518ms: ssh: handshake failed: EOF
	I1002 20:29:09.898194  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
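
Editor's note on the CoreDNS edit above: the sed expressions splice a hosts plugin block in ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), then pipe the edited ConfigMap back through `kubectl replace`. Reconstructed directly from the command, the Corefile gains:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

which is what the "host record injected into CoreDNS's ConfigMap" line at 20:29:12 below confirms.
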
	I1002 20:29:09.936055  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:29:10.111040  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:10.111126  704660 retry.go:31] will retry after 465.641139ms: ssh: handshake failed: EOF
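
Editor's note: the `ssh: handshake failed: EOF` dials above are treated as transient — retry.go re-dials after a short, slightly randomized delay instead of failing the whole addon install. A generic sketch of that shape, assuming a made-up retryTransient helper rather than minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryTransient re-runs op until it succeeds or attempts run out,
    // sleeping a jittered delay between tries - the shape behind the
    // "will retry after 315.573784ms" lines above.
    func retryTransient(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryTransient(5, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 { // first two dials fail, as in the log
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
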
	I1002 20:29:10.668679  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:29:10.668702  704660 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:29:10.797217  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:29:10.797297  704660 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:29:10.865274  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:10.881693  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:29:10.881716  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:29:10.886079  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:10.921408  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:10.943803  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:29:10.943828  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:29:10.978775  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:29:10.978805  704660 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:29:10.994840  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:11.011037  704660 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:29:11.011073  704660 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:29:11.030493  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:11.032022  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:29:11.032044  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:29:11.035800  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:29:11.035830  704660 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:29:11.071721  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:11.091723  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:11.106681  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:11.145109  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:29:11.145139  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:29:11.148280  704660 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.148309  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:29:11.202167  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.202196  704660 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:29:11.305203  704660 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.305232  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:29:11.316393  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:11.329281  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:29:11.329312  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:29:11.355129  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:11.398833  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.398857  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:29:11.409753  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:29:11.409781  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:29:11.426941  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.428747  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.489773  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:29:11.489841  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:29:11.494567  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.542853  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.615125  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:29:11.615198  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:29:11.677959  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:29:11.678040  704660 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:29:11.863554  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:29:11.863639  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:29:12.043926  704660 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.044010  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:29:12.200094  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:29:12.200165  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:29:12.470826  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.509295  704660 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.573157378s)
	I1002 20:29:12.509455  704660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.611238205s)
	I1002 20:29:12.509528  704660 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 20:29:12.511038  704660 node_ready.go:35] waiting up to 6m0s for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515289  704660 node_ready.go:49] node "addons-991638" is "Ready"
	I1002 20:29:12.515313  704660 node_ready.go:38] duration metric: took 3.935549ms for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515328  704660 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:29:12.515389  704660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:29:12.613485  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:29:12.613555  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:29:12.794628  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.92930886s)
	I1002 20:29:13.024378  704660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-991638" context rescaled to 1 replicas
	I1002 20:29:13.094487  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:29:13.094553  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:29:13.666276  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:29:13.666353  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:29:14.220703  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:14.220782  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:29:14.633137  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:16.743396  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:29:16.743479  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:16.772705  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:17.648047  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:29:17.758402  704660 addons.go:238] Setting addon gcp-auth=true in "addons-991638"
	I1002 20:29:17.758451  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:17.758915  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:17.782244  704660 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:29:17.782296  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:17.815647  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:19.091966  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.205841491s)
	I1002 20:29:19.092058  704660 addons.go:479] Verifying addon ingress=true in "addons-991638"
	I1002 20:29:19.092330  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.170806627s)
	I1002 20:29:19.092745  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.097877392s)
	I1002 20:29:19.092800  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.06227576s)
	I1002 20:29:19.095718  704660 out.go:179] * Verifying ingress addon...
	I1002 20:29:19.099717  704660 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:29:19.283832  704660 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:29:19.283853  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
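
Editor's note: the kapi.go loop behind these lines lists pods by label selector in the target namespace and keeps polling while any pod is short of ready. A hedged client-go sketch of that wait, using the selector and namespace from the log; waitForLabel is illustrative (it checks only the Running phase, whereas minikube also tracks readiness conditions):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching selector in ns is Running.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false // still Pending, as in the lines above
                }
            }
            if ready {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q in %q not running after %v", selector, ns, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ingress-nginx pods running")
    }
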
	I1002 20:29:19.648674  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.108386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.606825  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.102257  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.030489478s)
	I1002 20:29:21.102331  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.01058393s)
	I1002 20:29:21.102523  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.995812674s)
	I1002 20:29:21.102576  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.786160691s)
	I1002 20:29:21.102665  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.747515739s)
	I1002 20:29:21.102736  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.675772832s)
	W1002 20:29:21.102757  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:21.102773  704660 retry.go:31] will retry after 165.427061ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
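
Editor's note: kubectl's `[apiVersion not set, kind not set]` means the decoded document carried neither of the two identifying fields every Kubernetes manifest needs; the earlier `scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)` transfer at 20:29:09 is consistent with the file arriving effectively empty, so there was no object to validate (the retry at 20:29:21 re-applies the same pair of files with --force). A sketch of the check kubectl is making, with validate as a hypothetical helper and gopkg.in/yaml.v3 for decoding:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // typeMeta mirrors the two fields every Kubernetes manifest must carry.
    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    // validate is a hypothetical helper reproducing kubectl's complaint.
    func validate(doc []byte) error {
        var tm typeMeta
        if err := yaml.Unmarshal(doc, &tm); err != nil {
            return err
        }
        var missing []string
        if tm.APIVersion == "" {
            missing = append(missing, "apiVersion not set")
        }
        if tm.Kind == "" {
            missing = append(missing, "kind not set")
        }
        if len(missing) > 0 {
            return fmt.Errorf("error validating data: %v", missing)
        }
        return nil
    }

    func main() {
        fmt.Println(validate([]byte("# empty file\n")))              // both fields missing
        fmt.Println(validate([]byte("apiVersion: v1\nkind: Pod\n"))) // ok: <nil>
    }
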
	I1002 20:29:21.102843  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.674073931s)
	I1002 20:29:21.102857  704660 addons.go:479] Verifying addon metrics-server=true in "addons-991638"
	I1002 20:29:21.102896  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.608257689s)
	I1002 20:29:21.102908  704660 addons.go:479] Verifying addon registry=true in "addons-991638"
	I1002 20:29:21.103092  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.560138876s)
	I1002 20:29:21.103416  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.632501338s)
	W1002 20:29:21.103659  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:29:21.103480  704660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.588080107s)
	I1002 20:29:21.103716  704660 api_server.go:72] duration metric: took 12.130202438s to wait for apiserver process to appear ...
	I1002 20:29:21.103723  704660 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:29:21.103737  704660 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:29:21.104569  704660 retry.go:31] will retry after 131.465799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
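
Editor's note: this failure is a CRD-establishment race. The snapshot CRDs and a VolumeSnapshotClass CR went into one kubectl invocation, and the CR was mapped before the API server began serving snapshot.storage.k8s.io/v1, hence "ensure CRDs are installed first" and the sub-second retry. One way to sidestep the race is to poll discovery until the group/version answers before applying CRs; a hedged client-go sketch with a hypothetical waitForGroupVersion helper:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForGroupVersion polls the discovery endpoint until the API server
    // serves resources for gv - e.g. "snapshot.storage.k8s.io/v1" once the
    // snapshot CRDs are established.
    func waitForGroupVersion(cs *kubernetes.Clientset, gv string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if list, err := cs.Discovery().ServerResourcesForGroupVersion(gv); err == nil && len(list.APIResources) > 0 {
                return nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return fmt.Errorf("%s not served after %v", gv, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForGroupVersion(cs, "snapshot.storage.k8s.io/v1", 30*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("snapshot CRDs are being served; safe to apply VolumeSnapshotClass")
    }
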
	I1002 20:29:21.106517  704660 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-991638 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:29:21.106623  704660 out.go:179] * Verifying registry addon...
	I1002 20:29:21.110687  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:29:21.128889  704660 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
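
Editor's note: the healthz wait opened at 20:29:21.103737 resolves here. The probe is a plain HTTPS GET against the apiserver's /healthz that counts as healthy once it returns 200 with the literal body "ok". A minimal standalone probe, assuming the endpoint from the log and skipping certificate verification purely for illustration (the real client uses the cluster CA from the kubeconfig, and an anonymous GET may be rejected on clusters that lock down /healthz):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip verification only for this sketch; see note above.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the literal body "ok".
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }
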
	I1002 20:29:21.146707  704660 api_server.go:141] control plane version: v1.34.1
	I1002 20:29:21.146750  704660 api_server.go:131] duration metric: took 43.020902ms to wait for apiserver health ...
	I1002 20:29:21.146760  704660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:29:21.231778  704660 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:29:21.231803  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.232570  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.232990  704660 system_pods.go:59] 16 kube-system pods found
	I1002 20:29:21.233027  704660 system_pods.go:61] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233037  704660 system_pods.go:61] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233049  704660 system_pods.go:61] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.233054  704660 system_pods.go:61] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.233058  704660 system_pods.go:61] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.233072  704660 system_pods.go:61] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.233077  704660 system_pods.go:61] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.233082  704660 system_pods.go:61] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.233093  704660 system_pods.go:61] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.233100  704660 system_pods.go:61] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.233110  704660 system_pods.go:61] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.233117  704660 system_pods.go:61] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.233126  704660 system_pods.go:61] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.233138  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233145  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233152  704660 system_pods.go:61] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.233159  704660 system_pods.go:74] duration metric: took 86.393348ms to wait for pod list to return data ...
	I1002 20:29:21.233171  704660 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:29:21.236551  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:21.269271  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:29:21.290207  704660 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
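
Editor's note: this warning is Kubernetes optimistic concurrency at work. Between minikube's read and update of the local-path StorageClass, something else modified it, so the stale resourceVersion was rejected with "please apply your changes to the latest version and try again". The standard remedy is to re-read and retry the mutation, for example with client-go's retry.RetryOnConflict; markNonDefault below is an illustrative helper, not minikube's code:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading and retrying if someone else updated it concurrently.
    func markNonDefault(cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err // a Conflict here triggers another Get+Update round
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := markNonDefault(kubernetes.NewForConfigOrDie(cfg), "local-path"); err != nil {
            panic(err)
        }
        fmt.Println("local-path is no longer the default StorageClass")
    }
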
	I1002 20:29:21.375005  704660 default_sa.go:45] found service account: "default"
	I1002 20:29:21.375031  704660 default_sa.go:55] duration metric: took 141.854284ms for default service account to be created ...
	I1002 20:29:21.375042  704660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:29:21.403678  704660 system_pods.go:86] 17 kube-system pods found
	I1002 20:29:21.403714  704660 system_pods.go:89] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403724  704660 system_pods.go:89] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403730  704660 system_pods.go:89] "csi-hostpath-attacher-0" [e1b49a9e-cc2c-43ad-a104-7517ae3b9b71] Pending
	I1002 20:29:21.403736  704660 system_pods.go:89] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.403740  704660 system_pods.go:89] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.403744  704660 system_pods.go:89] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.403751  704660 system_pods.go:89] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.403755  704660 system_pods.go:89] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.403760  704660 system_pods.go:89] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.403767  704660 system_pods.go:89] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.403774  704660 system_pods.go:89] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.403789  704660 system_pods.go:89] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.403795  704660 system_pods.go:89] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.403857  704660 system_pods.go:89] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.403871  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403878  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403881  704660 system_pods.go:89] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.403889  704660 system_pods.go:126] duration metric: took 28.840694ms to wait for k8s-apps to be running ...
	I1002 20:29:21.403905  704660 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:29:21.403962  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:29:21.633145  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.633273  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.719440  704660 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.937165373s)
	I1002 20:29:21.723512  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:21.737044  704660 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:29:21.739233  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.105985614s)
	I1002 20:29:21.739269  704660 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:21.741380  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:29:21.741407  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:29:21.741519  704660 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:29:21.746220  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:29:21.749098  704660 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:29:21.749124  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
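
The repeated kapi.go:96 entries that follow are one poll loop per addon label selector: list the matching pods and keep waiting while any of them is still Pending. A rough client-go sketch of such a loop, assuming an already-configured clientset; waitForLabel and the 500ms interval are assumptions, not minikube's actual implementation:

package kapiwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForLabel polls pods matching selector in ns until every matching pod
// is Running, or ctx expires -- roughly what each kapi.go:96 line records.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false // still Pending: keep waiting
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // mirrors an addon wait timing out
		case <-time.After(500 * time.Millisecond):
		}
	}
}
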
	I1002 20:29:21.885645  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:29:21.885723  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:29:21.999241  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:21.999306  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:29:22.103650  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.107641  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:22.115675  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.249646  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:22.603835  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.614145  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.750221  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.104878  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.113990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.250841  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.614664  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.616397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.754661  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.028432  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.791823015s)
	I1002 20:29:24.104308  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.114739  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.250667  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.302476  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.194753737s)
	I1002 20:29:24.302845  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.033536467s)
	W1002 20:29:24.302913  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:24.302985  704660 retry.go:31] will retry after 309.54405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
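
Every retry of this apply fails identically: kubectl's client-side validation rejects ig-crd.yaml because at least one YAML document in the file carries no apiVersion or kind field (an empty document left behind a "---" separator trips the same check, which is consistent with every other object in the batch applying cleanly). A small sketch that reproduces the check locally, assuming gopkg.in/yaml.v3 and a naive "---" split; kubectl's own validator differs in detail:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	// kubectl validates each document of a multi-doc manifest separately.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var obj struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			fmt.Printf("doc %d: parse error: %v\n", i, err)
			continue
		}
		if obj.APIVersion == "" || obj.Kind == "" {
			// The condition behind "[apiVersion not set, kind not set]".
			fmt.Printf("doc %d: apiVersion/kind not set\n", i)
		}
	}
}
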
	I1002 20:29:24.302944  704660 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.89841849s)
	I1002 20:29:24.303063  704660 system_svc.go:56] duration metric: took 2.899157354s WaitForService to wait for kubelet
	I1002 20:29:24.303086  704660 kubeadm.go:586] duration metric: took 15.329570576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:29:24.303134  704660 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:29:24.305338  704660 addons.go:479] Verifying addon gcp-auth=true in "addons-991638"
	I1002 20:29:24.308194  704660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:29:24.308224  704660 node_conditions.go:123] node cpu capacity is 2
	I1002 20:29:24.308238  704660 node_conditions.go:105] duration metric: took 5.087392ms to run NodePressure ...
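
node_conditions.go reads the Node object's capacity and pressure conditions before declaring it healthy (here: 203034800Ki ephemeral storage, 2 CPUs, and no pressure condition set). A client-go sketch of the same read; checkNode is illustrative, not minikube's code:

package nodecheck

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNode prints the node's capacity and fails on any pressure condition,
// mirroring what the node_conditions.go lines above verify.
func checkNode(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	for _, c := range node.Status.Conditions {
		// MemoryPressure, DiskPressure and PIDPressure must all be False.
		if strings.HasSuffix(string(c.Type), "Pressure") && c.Status == corev1.ConditionTrue {
			return fmt.Errorf("node %s reports %s", name, c.Type)
		}
	}
	return nil
}
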
	I1002 20:29:24.308251  704660 start.go:241] waiting for startup goroutines ...
	I1002 20:29:24.310445  704660 out.go:179] * Verifying gcp-auth addon...
	I1002 20:29:24.313602  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:29:24.325918  704660 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:29:24.325990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:24.603413  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.613652  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:24.613983  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.750444  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.817604  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.103685  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.118065  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.249976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.317010  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.603841  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.613949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.750092  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.817987  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.957381  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.343690162s)
	W1002 20:29:25.957546  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:25.957590  704660 retry.go:31] will retry after 334.218122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:26.104386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.114584  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.250032  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.292352  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:26.317525  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:26.604047  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.613938  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.750249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.817111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.103343  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.113575  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.250109  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.317078  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.444622  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.152189827s)
	W1002 20:29:27.444714  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:27.444752  704660 retry.go:31] will retry after 546.51266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:27.604261  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.614167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.749521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.817914  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.992173  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:28.104304  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.114156  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.249193  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.317122  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:28.603290  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.614437  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.749750  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.817014  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:29:28.983712  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:28.983784  704660 retry.go:31] will retry after 1.260023447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
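
The delays chosen by retry.go:31 in this log (309ms, 334ms, 546ms, 1.26s so far, then 1.65s, 2.39s, 3.13s, 7.49s, 12.66s further down) grow roughly geometrically with random jitter. A hand-rolled sketch of that policy; the doubling factor, jitter range, and attempt cap are guesses, not minikube's exact parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op until it succeeds or attempts run out,
// doubling the base delay each round and adding up to ~50% random jitter.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2 // geometric growth, like the sequence in this log
	}
	return err
}

func main() {
	// Usage: fail three times, then succeed on the fourth attempt.
	i := 0
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		if i++; i < 4 {
			return errors.New("apply failed")
		}
		return nil
	})
}
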
	I1002 20:29:29.103350  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.114454  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.249644  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.317067  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:29.602986  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.613726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.749688  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.816730  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.103822  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.114057  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.244571  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:30.250615  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.603619  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.614026  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.749853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.816479  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.103600  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.114190  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.249506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.298691  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.054084159s)
	W1002 20:29:31.298721  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:31.298741  704660 retry.go:31] will retry after 1.646308182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:31.316219  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.605040  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.631189  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.750015  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.817796  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.103881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.116470  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.250021  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.317307  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.604391  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.614775  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.750540  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.816630  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.946032  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:33.104871  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.115283  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.250183  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.317668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:33.603187  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.614529  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.749647  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.817102  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.018177  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.072106262s)
	W1002 20:29:34.018217  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:34.018266  704660 retry.go:31] will retry after 2.385257575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:34.104529  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.114836  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.250452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.318843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.603645  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.614617  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.750082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.817533  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.107703  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.114893  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.251718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.603848  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.613657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.750110  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.816940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.103942  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.113970  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.250099  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.316846  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.404147  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:36.604239  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.613891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.750685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.818255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.103487  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.114495  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.250302  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.316913  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.595720  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.191535427s)
	W1002 20:29:37.595768  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:37.595789  704660 retry.go:31] will retry after 3.1319796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:37.604699  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.613531  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.750080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.820120  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.135110  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.135518  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.251304  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.317891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.603678  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.614208  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.750230  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.817842  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.110039  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.123577  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.253100  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.320981  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.606978  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.619008  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.757188  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.821029  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.104171  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.114472  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.250599  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.316853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.603622  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.614494  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.728573  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:40.750499  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.817269  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.103718  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.113793  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.251438  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.323113  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.606477  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.615889  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.749940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.819471  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.104623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.115622  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.203580  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.474960878s)
	W1002 20:29:42.203682  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:42.203776  704660 retry.go:31] will retry after 7.48710054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:42.250824  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.317605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.603374  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.614191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.750400  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.816718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.103173  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.114483  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.249820  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.317639  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.603139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.614668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.750509  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.817740  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.103982  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.113850  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.250679  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.604766  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.615339  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.749664  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.817244  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.105520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.115165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.323737  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.603415  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.614694  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.750384  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.817336  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.104015  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.113900  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.250650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.316397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.603826  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.613857  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.750135  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.817184  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.103139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.114040  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.250197  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.316961  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.603106  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.613879  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.753191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.816593  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.104633  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.114511  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.249966  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.317031  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.603266  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.614360  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.750158  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.817128  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.103974  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.113579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.250363  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.317726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.603262  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.614568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.691764  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:49.753093  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.818136  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.106234  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.117011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.250613  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.317535  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.605091  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.615017  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.751316  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.817578  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.107737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.116527  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.251344  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.319605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.408757  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.716938043s)
	W1002 20:29:51.408854  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.408899  704660 retry.go:31] will retry after 12.661372424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.603144  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.614399  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.750042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.817211  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.104464  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.115011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.250151  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.316858  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.603659  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.614216  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.751315  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.817053  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.104565  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.113559  704660 kapi.go:107] duration metric: took 32.002874096s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 20:29:53.250114  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.317821  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.603164  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.750146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.820167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.106776  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.316832  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.603001  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.750421  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.817545  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.103737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.250894  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.316949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.603085  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.750103  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.816937  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.103610  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.250374  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.351350  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.603669  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.750222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.816995  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.103711  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.250016  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.317173  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.603412  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.749585  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.817087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.106858  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.250249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.317416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.602677  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.751843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.816975  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.104520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.250328  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.316837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.603027  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.750542  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.817568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.118971  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.260853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.324376  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.603347  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.751070  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.817027  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.116318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.249998  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.318228  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.604526  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.750944  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.818452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.104307  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.254223  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.318397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.604952  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.750890  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.817295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.106126  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.254295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.317579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.603623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.755126  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.818458  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.070964  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:04.103003  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.251061  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.317116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.604016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.750159  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.819498  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.103756  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.249080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.603780  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.751506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.820087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.861050  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.790044781s)
	W1002 20:30:05.861139  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:05.861176  704660 retry.go:31] will retry after 17.393091817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:06.103387  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.250507  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.317837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:06.603460  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.750558  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.817614  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.103902  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.250598  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.316702  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.602834  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.754146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.822685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.103768  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.251042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.316848  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.603426  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.750576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.841843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.103764  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.250354  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.331806  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.605318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.750657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.817095  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.103398  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.255408  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.318022  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.603132  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.750403  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.818293  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.104225  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.250993  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.317127  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.603016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.749773  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.817866  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.103202  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.255976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.317255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.604954  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.750466  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.817799  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.121875  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.251358  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.317771  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.603035  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.749741  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.816693  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.103790  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.250141  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.317253  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.603881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.751654  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.834207  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.104408  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.249815  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.316650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.602801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.750009  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.817116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.120769  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.251147  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.352347  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.603722  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.749988  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.817248  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.104049  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.250170  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.317087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.603966  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.751038  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.817272  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.104249  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.254111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.354335  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.603774  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.750446  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.820222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.104228  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.250204  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.317641  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.603235  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.750469  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.817720  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.103219  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.249901  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.354982  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.603352  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.750342  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.816943  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.104120  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.250875  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.316432  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.604183  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.751198  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.851690  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.103478  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.249326  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:22.318236  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.605156  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.750311  704660 kapi.go:107] duration metric: took 1m1.004091859s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:30:22.818417  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.103467  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.254761  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:23.317834  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.603470  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.816589  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.105925  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.317505  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.604867  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.802347  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.547475184s)
	W1002 20:30:24.802389  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:24.802426  704660 retry.go:31] will retry after 27.998098838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
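
The scheduled delays grow from 12.66s to 17.39s to 27.99s across attempts — the signature of jittered exponential backoff. A self-contained sketch of that pattern (illustrative only; the actual retry.go base, factor, and jitter are not visible in this log):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op until it succeeds or attempts are exhausted,
// sleeping base*factor^n plus random jitter between attempts.
func retryWithBackoff(op func() error, attempts int, base time.Duration, factor, jitter float64) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := time.Duration(float64(delay) * (1 + jitter*rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = time.Duration(float64(delay) * factor) // grow the next delay geometrically
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // fail twice, then succeed
		}
		return nil
	}, 5, 10*time.Second, 1.5, 0.5)
	fmt.Println(err)
}
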
	I1002 20:30:24.817602  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.106548  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.317082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.603074  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.817303  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.103771  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.316828  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.603416  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.816576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.102651  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.316355  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.603434  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.816609  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.103586  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.318112  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.604364  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.816965  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.103801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.317624  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.603114  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.817415  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.103838  704660 kapi.go:107] duration metric: took 1m11.004121778s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:30:30.316991  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.817460  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.316734  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.817416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.321137  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.818165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:33.318614  704660 kapi.go:107] duration metric: took 1m9.005007455s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:30:33.319986  704660 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-991638 cluster.
	I1002 20:30:33.321179  704660 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:30:33.322167  704660 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
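
Per the message above, only the gcp-auth-skip-secret label key matters for opting a pod out of credential mounting. A sketch that renders such a pod spec using client-go types — the pod name, image, and label value are placeholders, since the addon message specifies only the key:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			Labels: map[string]string{
				// Only the key matters per the gcp-auth addon message;
				// "true" is an arbitrary value.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // emits an applyable manifest carrying the opt-out label
}
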
	I1002 20:30:52.801095  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:30:53.728667  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:30:53.728763  704660 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:30:53.731775  704660 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, registry-creds, volcano, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1002 20:30:53.733577  704660 addons.go:514] duration metric: took 1m44.75893549s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns registry-creds volcano storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1002 20:30:53.733631  704660 start.go:246] waiting for cluster config update ...
	I1002 20:30:53.733654  704660 start.go:255] writing updated cluster config ...
	I1002 20:30:53.733956  704660 ssh_runner.go:195] Run: rm -f paused
	I1002 20:30:53.738361  704660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:53.742889  704660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.750373  704660 pod_ready.go:94] pod "coredns-66bc5c9577-wkwnx" is "Ready"
	I1002 20:30:53.750443  704660 pod_ready.go:86] duration metric: took 7.51962ms for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.752616  704660 pod_ready.go:83] waiting for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.757985  704660 pod_ready.go:94] pod "etcd-addons-991638" is "Ready"
	I1002 20:30:53.758011  704660 pod_ready.go:86] duration metric: took 5.320347ms for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.760125  704660 pod_ready.go:83] waiting for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.764465  704660 pod_ready.go:94] pod "kube-apiserver-addons-991638" is "Ready"
	I1002 20:30:53.764491  704660 pod_ready.go:86] duration metric: took 4.30499ms for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.766969  704660 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.142419  704660 pod_ready.go:94] pod "kube-controller-manager-addons-991638" is "Ready"
	I1002 20:30:54.142449  704660 pod_ready.go:86] duration metric: took 375.451024ms for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.342704  704660 pod_ready.go:83] waiting for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.742276  704660 pod_ready.go:94] pod "kube-proxy-xfnp6" is "Ready"
	I1002 20:30:54.742307  704660 pod_ready.go:86] duration metric: took 399.528424ms for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.943143  704660 pod_ready.go:83] waiting for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344485  704660 pod_ready.go:94] pod "kube-scheduler-addons-991638" is "Ready"
	I1002 20:30:55.344522  704660 pod_ready.go:86] duration metric: took 401.35166ms for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344539  704660 pod_ready.go:40] duration metric: took 1.606141213s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:55.401584  704660 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:30:55.403167  704660 out.go:179] * Done! kubectl is now configured to use "addons-991638" cluster and "default" namespace by default
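
The pod_ready.go lines above poll labelled kube-system pods until each reports the Ready condition. A rough client-go equivalent of that wait loop — the selectors and the 4-minute budget come from the log, while the polling helper itself is illustrative, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod carries a PodReady=True condition.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s extra wait above
	for _, selector := range []string{"k8s-app=kube-dns", "component=etcd"} {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting for %s\n", selector)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
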
	
	
	==> Docker <==
	Oct 02 20:37:34 addons-991638 dockerd[1126]: time="2025-10-02T20:37:34.590316757Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:37:34 addons-991638 dockerd[1126]: time="2025-10-02T20:37:34.698948485Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:37:59 addons-991638 dockerd[1126]: time="2025-10-02T20:37:59.594170320Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:37:59 addons-991638 dockerd[1126]: time="2025-10-02T20:37:59.693224991Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:38:28 addons-991638 dockerd[1126]: time="2025-10-02T20:38:28.880773467Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:38:28 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:38:28Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 20:38:52 addons-991638 dockerd[1126]: time="2025-10-02T20:38:52.598298706Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:38:52 addons-991638 dockerd[1126]: time="2025-10-02T20:38:52.695241277Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:39:19 addons-991638 dockerd[1126]: time="2025-10-02T20:39:19.064695814Z" level=info msg="ignoring event" container=ebfdaff2b198a80d680ea12030e69d14c1b5ef229534a3e53bbf351bc1f4ea72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:39:49 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:39:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1835b72aeeced3971cf822ef77d27ad8cd784390aa0a66af560f9268e113c031/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:39:49 addons-991638 dockerd[1126]: time="2025-10-02T20:39:49.555100510Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:39:49 addons-991638 dockerd[1126]: time="2025-10-02T20:39:49.653392743Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:40:04 addons-991638 dockerd[1126]: time="2025-10-02T20:40:04.601907626Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:40:04 addons-991638 dockerd[1126]: time="2025-10-02T20:40:04.701860787Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:40:10 addons-991638 dockerd[1126]: time="2025-10-02T20:40:10.909021324Z" level=info msg="ignoring event" container=1835b72aeeced3971cf822ef77d27ad8cd784390aa0a66af560f9268e113c031 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:40:35 addons-991638 dockerd[1126]: time="2025-10-02T20:40:35.869102148Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=7f30da4857b7188512501231d35f21e1f1e1dc4a418f640944f2b50bb5df9d48
	Oct 02 20:40:35 addons-991638 dockerd[1126]: time="2025-10-02T20:40:35.897953422Z" level=info msg="ignoring event" container=7f30da4857b7188512501231d35f21e1f1e1dc4a418f640944f2b50bb5df9d48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:40:36 addons-991638 dockerd[1126]: time="2025-10-02T20:40:36.026665728Z" level=info msg="ignoring event" container=dbb862b79dae1069fc1e7ecd334b4a669dabe21d19fc025ccdec4bf3469535fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:40:54 addons-991638 dockerd[1126]: time="2025-10-02T20:40:54.891099449Z" level=info msg="ignoring event" container=2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:40:55 addons-991638 dockerd[1126]: time="2025-10-02T20:40:55.010741437Z" level=info msg="ignoring event" container=5e5723de853e6685797f30e12e3333d3571b244424fcd2b2e30c869df849261b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:41:01 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:41:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3c1b0f6ab5db80afc21eae627bf1d5aee9e545d51a0938291f2edf00ebb13f6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:41:01 addons-991638 dockerd[1126]: time="2025-10-02T20:41:01.749163826Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:41:01 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:41:01Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 02 20:41:13 addons-991638 dockerd[1126]: time="2025-10-02T20:41:13.770633632Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:41:15 addons-991638 dockerd[1126]: time="2025-10-02T20:41:15.784248657Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
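
The toomanyrequests errors are Docker Hub's anonymous pull throttle, which is why the busybox and nginx pulls above kept failing. Hub reports the current quota in response headers on a manifest HEAD request; a sketch using the documented ratelimitpreview/test endpoint, assuming an anonymous token suffices:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token scoped to the dedicated rate-limit test repo.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; Docker Hub returns the quota in response headers.
	req, _ := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

Once ratelimit-remaining reaches zero, authenticated pulls (docker login) or a registry mirror are the usual ways to keep CI image pulls flowing.
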
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	47dac9cf297c2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   bbce1f80c46b4       busybox                                    default
	810d41d3d1f91       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             11 minutes ago      Running             controller                               0                   38baae6c52ebc       ingress-nginx-controller-9cc49f96f-g6rz7   ingress-nginx
	7fe1ae5b58acc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	087c9272590bb       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	f673a92f38d37       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	26e913322af4f       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	f33b41dff54c1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	8c93b919c5b4b       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   a9a8d56da7da5       csi-hostpath-resizer-0                     kube-system
	714339ab4a604       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	3afb513dbbbaa       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   5c0161b7af378       csi-hostpath-attacher-0                    kube-system
	3ef8d0f1a48cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   11 minutes ago      Exited              patch                                    0                   bf2651aa1dde2       ingress-nginx-admission-patch-z8w27        ingress-nginx
	0612a088672a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   11 minutes ago      Exited              create                                   0                   3e77d9aaaed22       ingress-nginx-admission-create-h2p7z       ingress-nginx
	edb7914b91d73       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   063272a1fd848       snapshot-controller-7d9fbc56b8-n92kj       kube-system
	df4c807a71bc6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            11 minutes ago      Running             gadget                                   0                   2dffa89109ee8       gadget-gq5qh                               gadget
	eebe9684b11cf       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   30e397fdcba62       snapshot-controller-7d9fbc56b8-htvkn       kube-system
	dc6958ff54fd4       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         11 minutes ago      Running             minikube-ingress-dns                     0                   c8ba98b08e917       kube-ingress-dns-minikube                  kube-system
	7b7e993c0e79f       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   48962134af601       storage-provisioner                        kube-system
	6691f55a72958       138784d87c9c5                                                                                                                                12 minutes ago      Running             coredns                                  0                   8d8b118e8d1e4       coredns-66bc5c9577-wkwnx                   kube-system
	484f1ee7ca6c4       05baa95f5142d                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   9057048c41ea1       kube-proxy-xfnp6                           kube-system
	5dc910c8154e4       a1894772a478e                                                                                                                                12 minutes ago      Running             etcd                                     0                   c6f607736ce1a       etcd-addons-991638                         kube-system
	14517010441e5       b5f57ec6b9867                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   45e90d4f82e13       kube-scheduler-addons-991638               kube-system
	aac6857cf97a0       7eb2c6ff0c5a7                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   b61da85a9eb0e       kube-controller-manager-addons-991638      kube-system
	a59993882d357       43911e833d64d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   36c3274520a66       kube-apiserver-addons-991638               kube-system
	
	
	==> controller_ingress [810d41d3d1f9] <==
	I1002 20:30:30.910173       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1002 20:30:30.910704       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:30:30.918480       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1002 20:30:30.918697       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-g6rz7"
	I1002 20:30:30.924600       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.934073       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.957588       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:30:30.957659       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1002 20:30:30.957685       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1002 20:41:00.654802       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1002 20:41:00.656479       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1002 20:41:00.660596       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W1002 20:41:00.661083       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1002 20:41:00.666258       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:41:00.669648       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"001d4343-4f08-46c8-902f-8636f6279caa", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2969", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I1002 20:41:00.711843       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:41:00.712787       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1002 20:41:03.995039       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1002 20:41:03.995734       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:41:04.039335       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:41:04.039868       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1002 20:41:07.329886       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1002 20:41:30.926334       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	W1002 20:41:30.933118       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1002 20:41:30.933853       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"001d4343-4f08-46c8-902f-8636f6279caa", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3040", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
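
Note on the warnings above: "no object matching key \"default/nginx\"" and "Service \"default/nginx\" does not have any active Endpoint" are consistent with the nginx pod never becoming Ready (see the kubelet section below for the Docker Hub pull rate limit), so the Service has no endpoints for the controller to route to. A minimal check, assuming the same context name:

	kubectl --context addons-991638 get endpoints nginx -n default
	kubectl --context addons-991638 get pods -n default -l run=nginx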
	
	
	==> coredns [6691f55a7295] <==
	[INFO] 10.244.0.7:47201 - 40794 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002771285s
	[INFO] 10.244.0.7:47201 - 57423 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000191904s
	[INFO] 10.244.0.7:47201 - 29961 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000108481s
	[INFO] 10.244.0.7:35713 - 8952 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191206s
	[INFO] 10.244.0.7:35713 - 8475 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100112s
	[INFO] 10.244.0.7:33033 - 27442 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128445s
	[INFO] 10.244.0.7:33033 - 27253 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087024s
	[INFO] 10.244.0.7:45040 - 19609 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108638s
	[INFO] 10.244.0.7:45040 - 19412 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134558s
	[INFO] 10.244.0.7:37712 - 40936 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001243118s
	[INFO] 10.244.0.7:37712 - 41124 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001461721s
	[INFO] 10.244.0.7:56368 - 25712 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121651s
	[INFO] 10.244.0.7:56368 - 25933 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087615s
	[INFO] 10.244.0.26:33665 - 7524 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225356s
	[INFO] 10.244.0.26:36616 - 9923 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170948s
	[INFO] 10.244.0.26:57364 - 60911 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153093s
	[INFO] 10.244.0.26:49778 - 1221 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113478s
	[INFO] 10.244.0.26:50758 - 6790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157762s
	[INFO] 10.244.0.26:47970 - 38720 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085318s
	[INFO] 10.244.0.26:47839 - 36929 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002380387s
	[INFO] 10.244.0.26:52240 - 40464 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002084794s
	[INFO] 10.244.0.26:58902 - 63295 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001598231s
	[INFO] 10.244.0.26:38424 - 57615 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001549484s
	[INFO] 10.244.0.29:36958 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254756s
	[INFO] 10.244.0.29:59866 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178841s
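
The NXDOMAIN bursts above are ordinary resolv.conf search-path expansion, not failures: with the cluster-first DNS default of ndots:5, a lookup such as registry.kube-system.svc.cluster.local is first tried with each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute name resolves with NOERROR. A sketch of the pod-side check, assuming the default/busybox pod is still running:

	kubectl --context addons-991638 exec -n default busybox -- cat /etc/resolv.conf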
	
	
	==> describe nodes <==
	Name:               addons-991638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-991638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-991638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-991638
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-991638"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-991638
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:41:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:40:37 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:40:37 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:40:37 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:40:37 +0000   Thu, 02 Oct 2025 20:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-991638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 72f32394f70644d59920eb3322dfa720
	  System UUID:                86ebb095-120f-4f4a-aceb-13d70f79315b
	  Boot ID:                    da6cbe7f-2b2e-4cba-8b8d-394577434cdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-gq5qh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-g6rz7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-wkwnx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-22xqp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-991638                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-991638                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-991638       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xfnp6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-991638                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-creds-764b6fb674-nsjx4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-htvkn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-n92kj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (3%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node addons-991638 event: Registered Node addons-991638 in Controller
	  Normal   NodeReady                12m                kubelet          Node addons-991638 status is now: NodeReady
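
Nothing in the node status points at resource pressure: all pressure conditions are False, and only 850m of 2 CPUs (42%) and 260Mi of ~8Gi memory are requested across 19 pods, so the failing pods below are not a scheduling or capacity problem. The same summary can be reproduced directly, assuming the cluster is still up:

	kubectl --context addons-991638 describe node addons-991638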
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:27] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5dc910c8154e] <==
	{"level":"warn","ts":"2025-10-02T20:28:59.796857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.825855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.835763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.861875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.881048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.889633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.959804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.946219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.972286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.836192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.866041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.877941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.897162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.933812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.977588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.014404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.063387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.106303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.178294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.193258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.208837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.237195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:38:58.669143Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1750}
	{"level":"info","ts":"2025-10-02T20:38:58.735928Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1750,"took":"66.108561ms","hash":2247637866,"current-db-size-bytes":10399744,"current-db-size":"10 MB","current-db-size-in-use-bytes":6627328,"current-db-size-in-use":"6.6 MB"}
	{"level":"info","ts":"2025-10-02T20:38:58.735983Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2247637866,"revision":1750,"compact-revision":-1}
	
	
	==> kernel <==
	 20:41:31 up  3:23,  0 user,  load average: 1.42, 1.51, 2.19
	Linux addons-991638 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [a59993882d35] <==
	I1002 20:34:16.450854       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:17.114908       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 20:34:17.153003       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 20:34:17.178981       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 20:34:17.225384       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:17.248672       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:17.538179       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 20:34:17.852979       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1002 20:34:17.905403       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 20:34:18.021722       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1002 20:34:18.022097       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1002 20:34:18.153528       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	I1002 20:34:18.220255       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 20:34:18.323204       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1002 20:34:18.375763       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1002 20:34:18.407803       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1002 20:34:19.216280       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1002 20:34:19.501373       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1002 20:34:36.832244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45764: use of closed network connection
	E1002 20:34:37.126713       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45804: use of closed network connection
	E1002 20:34:37.290602       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45832: use of closed network connection
	I1002 20:35:11.208106       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.127.144"}
	I1002 20:39:00.779812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:41:00.657613       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 20:41:00.978461       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.49.99"}
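
The "Terminating all watchers from cacher *.volcano.sh" lines at 20:34 appear to correspond to the Volcano CRDs being deleted after the failed TestAddons/serial/Volcano run; each Adding GroupVersion / watcher-termination pair is the API server rebuilding its discovery information as CRDs come and go. A sketch of how to confirm a group is gone:

	kubectl --context addons-991638 api-resources --api-group=batch.volcano.sh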
	
	
	==> kube-controller-manager [aac6857cf97a] <==
	E1002 20:40:36.460127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:40:37.886396       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:40:38.422493       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:40:38.423575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:40:46.292995       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:40:46.294200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:40:47.186048       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:40:47.187280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:40:47.267484       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:40:47.268686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:40:52.886632       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:40:59.610924       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:40:59.612024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:41:07.887537       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:41:09.560771       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:41:09.562112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:41:19.681717       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:41:19.682978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:41:21.984743       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:41:21.988053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:41:22.887812       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 20:41:25.279293       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:41:25.280372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:41:29.731793       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:41:29.732844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
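
The recurring 'storageclass.storage.k8s.io "local-path" not found' error means default/test-pvc names a StorageClass that does not exist on this cluster (the local-path provisioner was presumably not enabled or already torn down), so the persistentvolume-binder keeps retrying. A quick way to inspect both sides, assuming the same context:

	kubectl --context addons-991638 get storageclass
	kubectl --context addons-991638 describe pvc test-pvc -n default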
	
	
	==> kube-proxy [484f1ee7ca6c] <==
	I1002 20:29:10.144358       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:29:10.287533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:29:10.388187       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:29:10.388220       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:29:10.388302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:29:10.427067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:29:10.427117       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:29:10.431953       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:29:10.432214       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:29:10.432229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:29:10.433939       1 config.go:200] "Starting service config controller"
	I1002 20:29:10.433950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:29:10.433980       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:29:10.433985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:29:10.433996       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:29:10.434000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:29:10.435854       1 config.go:309] "Starting node config controller"
	I1002 20:29:10.435864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:29:10.435871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:29:10.535044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:29:10.535084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:29:10.535128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
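
The only non-informational line here is the startup warning that nodePortAddresses is unset, meaning NodePort connections are accepted on all local IPs; the log itself suggests "--nodeport-addresses primary". On kubeadm-based clusters such as minikube's, kube-proxy reads its configuration from the kube-proxy ConfigMap, which can be inspected with (sketch):

	kubectl --context addons-991638 -n kube-system get configmap kube-proxy -o yaml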
	
	
	==> kube-scheduler [14517010441e] <==
	E1002 20:29:00.811484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:00.815087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:00.815264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:00.815378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:29:00.815413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:00.815443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:00.815517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:00.815547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:00.815654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:29:00.815692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:29:00.815742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:29:01.619085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:29:01.626118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:29:01.726859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:01.845808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:01.894559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:01.899233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:29:01.914113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:29:01.933506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:01.941316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:02.102088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:02.108982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:02.129471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:29:02.240337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 20:29:04.797841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:40:55 addons-991638 kubelet[2264]: I1002 20:40:55.666352    2264 scope.go:117] "RemoveContainer" containerID="2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73"
	Oct 02 20:40:55 addons-991638 kubelet[2264]: I1002 20:40:55.698980    2264 scope.go:117] "RemoveContainer" containerID="2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73"
	Oct 02 20:40:55 addons-991638 kubelet[2264]: E1002 20:40:55.699918    2264 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73" containerID="2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73"
	Oct 02 20:40:55 addons-991638 kubelet[2264]: I1002 20:40:55.699965    2264 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73"} err="failed to get container status \"2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2380c15f69fdfeb2fb998cc728100edfe014b7739633a4f817980610f8489f73"
	Oct 02 20:40:57 addons-991638 kubelet[2264]: I1002 20:40:57.556551    2264 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34ac532-4ae3-4ba7-a7fb-9f87c37f5519" path="/var/lib/kubelet/pods/f34ac532-4ae3-4ba7-a7fb-9f87c37f5519/volumes"
	Oct 02 20:40:59 addons-991638 kubelet[2264]: E1002 20:40:59.546243    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:41:01 addons-991638 kubelet[2264]: I1002 20:41:01.088401    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlw9\" (UniqueName: \"kubernetes.io/projected/c8ef7872-d301-45cb-9b5c-e7fc2319c39a-kube-api-access-7zlw9\") pod \"nginx\" (UID: \"c8ef7872-d301-45cb-9b5c-e7fc2319c39a\") " pod="default/nginx"
	Oct 02 20:41:01 addons-991638 kubelet[2264]: E1002 20:41:01.753731    2264 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:41:01 addons-991638 kubelet[2264]: E1002 20:41:01.753789    2264 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:41:01 addons-991638 kubelet[2264]: E1002 20:41:01.753861    2264 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(c8ef7872-d301-45cb-9b5c-e7fc2319c39a): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:41:01 addons-991638 kubelet[2264]: E1002 20:41:01.753892    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:41:02 addons-991638 kubelet[2264]: E1002 20:41:02.773361    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:41:08 addons-991638 kubelet[2264]: I1002 20:41:08.545102    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:41:13 addons-991638 kubelet[2264]: E1002 20:41:13.773583    2264 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:41:13 addons-991638 kubelet[2264]: E1002 20:41:13.773676    2264 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:41:13 addons-991638 kubelet[2264]: E1002 20:41:13.773752    2264 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:41:13 addons-991638 kubelet[2264]: E1002 20:41:13.773785    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:41:15 addons-991638 kubelet[2264]: E1002 20:41:15.788087    2264 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:41:15 addons-991638 kubelet[2264]: E1002 20:41:15.788144    2264 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:41:15 addons-991638 kubelet[2264]: E1002 20:41:15.788215    2264 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(c8ef7872-d301-45cb-9b5c-e7fc2319c39a): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:41:15 addons-991638 kubelet[2264]: E1002 20:41:15.788251    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:41:26 addons-991638 kubelet[2264]: E1002 20:41:26.546962    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c8ef7872-d301-45cb-9b5c-e7fc2319c39a"
	Oct 02 20:41:27 addons-991638 kubelet[2264]: E1002 20:41:27.547005    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:41:31 addons-991638 kubelet[2264]: E1002 20:41:31.316100    2264 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 20:41:31 addons-991638 kubelet[2264]: E1002 20:41:31.316194    2264 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/915a1770-063b-4100-8bfa-c7e4d2680639-gcr-creds podName:915a1770-063b-4100-8bfa-c7e4d2680639 nodeName:}" failed. No retries permitted until 2025-10-02 20:43:33.316175748 +0000 UTC m=+869.874397214 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/915a1770-063b-4100-8bfa-c7e4d2680639-gcr-creds") pod "registry-creds-764b6fb674-nsjx4" (UID: "915a1770-063b-4100-8bfa-c7e4d2680639") : secret "registry-creds-gcr" not found
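
The kubelet log pins down the root cause of the failing default/nginx and default/task-pv-pod pods: unauthenticated Docker Hub pulls hitting the toomanyrequests rate limit. Two common workarounds, sketched here and not part of the test itself: pre-load the image into the cluster so no registry pull is needed, or authenticate pulls with an image pull secret (the placeholders <user> and <token> are hypothetical) that the pod spec or default service account then references:

	minikube -p addons-991638 image load nginx:alpine
	kubectl --context addons-991638 create secret docker-registry dockerhub-creds \
	  --docker-username=<user> --docker-password=<token> -n default

The separate registry-creds-gcr error is unrelated to the rate limit: the registry-creds addon was enabled without being configured, which is normally done via "minikube addons configure registry-creds".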
	
	
	==> storage-provisioner [7b7e993c0e79] <==
	W1002 20:41:05.710365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:07.713145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:07.721394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:09.724999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:09.729768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:11.734055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:11.740776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:13.743576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:13.748867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:15.752053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:15.756508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:17.759555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:17.764464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:19.767383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:19.774329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:21.778426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:21.783216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:23.786655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:23.791544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:25.795481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:25.799962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:27.803540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:27.810119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:29.813146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:41:29.821735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
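
These warnings indicate the storage-provisioner still lists and watches the deprecated v1 Endpoints API every couple of seconds (commonly leader-election traffic in older provisioner libraries); they are noise rather than errors until the provisioner moves to Leases or EndpointSlice. The replacement objects can be listed with (sketch):

	kubectl --context addons-991638 get endpointslices -A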
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
helpers_test.go:269: (dbg) Run:  kubectl --context addons-991638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-991638 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-991638 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4: exit status 1 (119.455713ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-991638/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:41:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:  10.244.0.35
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zlw9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7zlw9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  32s                default-scheduler  Successfully assigned default/nginx to addons-991638
	  Warning  Failed     31s                kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    17s (x2 over 31s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17s (x2 over 31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     17s                kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x2 over 30s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6s (x2 over 30s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-991638/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:35:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxbjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-sxbjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-991638
	  Warning  Failed     4m28s (x4 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m4s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m4s (x5 over 6m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m4s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     57s (x20 over 6m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    46s (x21 over 6m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6vpp (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-p6vpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h2p7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z8w27" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nsjx4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-991638 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4: exit status 1
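All of the pending pods above are blocked by the same external cause, Docker Hub's unauthenticated "toomanyrequests" pull limit, rather than by the CSI or storage machinery under test. The usual mitigation is an authenticated pull via a dockerconfigjson secret referenced from spec.imagePullSecrets; a sketch with client-go, where the secret name, namespace, and credentials are placeholders rather than anything from this run:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Placeholder credentials; kubelet parses this payload the same way it
		// parses a ~/.docker/config.json "auths" entry.
		secret := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "regcred", Namespace: "default"},
			Type:       corev1.SecretTypeDockerConfigJson,
			StringData: map[string]string{
				corev1.DockerConfigJsonKey: `{"auths":{"https://index.docker.io/v1/":{"username":"USER","password":"TOKEN"}}}`,
			},
		}
		if _, err := cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		// Pods then opt in with: spec.imagePullSecrets: [{name: regcred}]
	}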
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.926471005s)
--- FAIL: TestAddons/parallel/CSI (372.04s)

TestAddons/parallel/LocalPath (345.88s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-991638 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-991638 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-991638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: [... the same PVC phase check repeated ~300 times across the 5m0s wait ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-991638 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.329µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
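The run of identical helper lines above is a fixed-interval phase poll that finally exhausted its 5m0s context. The shape of that kind of wait, sketched with client-go and apimachinery's wait package; the function name, 2s interval, and error handling are assumptions, not minikube's actual helpers_test.go code:

	package waiters

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPVCBound polls the PVC's status.phase until it reaches Bound or
	// the context/timeout expires (the "context deadline exceeded" seen above).
	func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
	}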
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-991638
helpers_test.go:243: (dbg) docker inspect addons-991638:

-- stdout --
	[
	    {
	        "Id": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	        "Created": "2025-10-02T20:28:36.164446632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 705058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:28:36.229753591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/hosts",
	        "LogPath": "/var/lib/docker/containers/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84/ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84-json.log",
	        "Name": "/addons-991638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-991638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-991638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac51530cb59149e0024aa33bef8282419a2f155efcb917e2e054a950d545db84",
	                "LowerDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953-init/diff:/var/lib/docker/overlay2/3c380b0850506122817bc2917299dd60fe15a32ab35b7debe4519f1f9045f4d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a09dcd4fc0c74d15c4e01fc62e2b1752004de2364cae328fd8afc568951953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-991638",
	                "Source": "/var/lib/docker/volumes/addons-991638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-991638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-991638",
	                "name.minikube.sigs.k8s.io": "addons-991638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "768c8a7310c370a43da0c26c5d036d5e7219705fa051b89897a391452ea6d9a6",
	            "SandboxKey": "/var/run/docker/netns/768c8a7310c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-991638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a0:60:40:27:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05f483610a0fe679b5a4ae4efa1318f553b88c9d264d6b136b55ee1eb76c3654",
	                    "EndpointID": "cbb01d4023b7a4128894d4e3144f6ccc9b60257273c0bfbde032cb7624cd4fb7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-991638",
	                        "ac51530cb591"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
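The post-mortem only needs two facts from that inspect blob: the container is still running, and it holds 192.168.49.2 on the addons-991638 network. The same check can be done with the Docker Go SDK instead of shelling out to docker inspect; a sketch, assuming github.com/docker/docker/client is available where the harness runs:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "addons-991638")
		if err != nil {
			panic(err)
		}
		// Mirrors .State.Status and .NetworkSettings.Networks["addons-991638"].IPAddress above.
		fmt.Println("status:", info.State.Status)
		fmt.Println("ip:", info.NetworkSettings.Networks["addons-991638"].IPAddress)
	}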
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-991638 -n addons-991638
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 logs -n 25: (1.301335624s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-625181 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ start   │ -o=json --download-only -p download-only-545661 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-625181                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-625181   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ delete  │ -p download-only-545661                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-545661   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p download-docker-039409 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p download-docker-039409                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-039409 │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ start   │ --download-only -p binary-mirror-067581 --alsologtostderr --binary-mirror http://127.0.0.1:39571 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ delete  │ -p binary-mirror-067581                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-067581   │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ addons  │ disable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-991638                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ start   │ -p addons-991638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:30 UTC │
	│ addons  │ addons-991638 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ addons  │ addons-991638 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ addons  │ addons-991638 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ ip      │ addons-991638 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ enable headlamp -p addons-991638 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	│ addons  │ addons-991638 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-991638          │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:35 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:28:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:28:10.231562  704660 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:10.231700  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.231711  704660 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:10.231716  704660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:10.232008  704660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:28:10.232510  704660 out.go:368] Setting JSON to false
	I1002 20:28:10.233399  704660 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11417,"bootTime":1759425473,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:28:10.233494  704660 start.go:140] virtualization:  
	I1002 20:28:10.236719  704660 out.go:179] * [addons-991638] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:28:10.240328  704660 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:28:10.240425  704660 notify.go:220] Checking for updates...
	I1002 20:28:10.246179  704660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:28:10.249006  704660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:28:10.251947  704660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:28:10.255157  704660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:28:10.257883  704660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:28:10.260862  704660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:28:10.288692  704660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:28:10.288859  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.345302  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.335898449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.345417  704660 docker.go:318] overlay module found
	I1002 20:28:10.348598  704660 out.go:179] * Using the docker driver based on user configuration
	I1002 20:28:10.351429  704660 start.go:304] selected driver: docker
	I1002 20:28:10.351448  704660 start.go:924] validating driver "docker" against <nil>
	I1002 20:28:10.351462  704660 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:28:10.352198  704660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:28:10.405054  704660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:28:10.396474632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:28:10.405212  704660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:28:10.405467  704660 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:28:10.408345  704660 out.go:179] * Using Docker driver with root privileges
	I1002 20:28:10.411100  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:10.411184  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:10.411197  704660 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:28:10.411276  704660 start.go:348] cluster config:
	{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:10.414279  704660 out.go:179] * Starting "addons-991638" primary control-plane node in "addons-991638" cluster
	I1002 20:28:10.417120  704660 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:28:10.419910  704660 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:28:10.422725  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:10.422776  704660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 20:28:10.422791  704660 cache.go:58] Caching tarball of preloaded images
	I1002 20:28:10.422838  704660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:28:10.422873  704660 preload.go:233] Found /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 20:28:10.422902  704660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 20:28:10.423255  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:10.423397  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json: {Name:mk2f26d255d9ea8bd15922b678de4d5774eef391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:10.438348  704660 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:28:10.438495  704660 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:28:10.438518  704660 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:28:10.438524  704660 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:28:10.438532  704660 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:28:10.438537  704660 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:28:28.104678  704660 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:28:28.104717  704660 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:28:28.104748  704660 start.go:360] acquireMachinesLock for addons-991638: {Name:mk53aeb56b1e9fb052ee11df133ba143769d5932 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:28:28.104882  704660 start.go:364] duration metric: took 113.831µs to acquireMachinesLock for "addons-991638"
	I1002 20:28:28.104912  704660 start.go:93] Provisioning new machine with config: &{Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:28:28.104985  704660 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:28:28.108517  704660 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:28:28.108807  704660 start.go:159] libmachine.API.Create for "addons-991638" (driver="docker")
	I1002 20:28:28.108861  704660 client.go:168] LocalClient.Create starting
	I1002 20:28:28.108989  704660 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem
	I1002 20:28:28.920995  704660 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem
	I1002 20:28:29.719304  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:28:29.735220  704660 cli_runner.go:211] docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:28:29.735320  704660 network_create.go:284] running [docker network inspect addons-991638] to gather additional debugging logs...
	I1002 20:28:29.735342  704660 cli_runner.go:164] Run: docker network inspect addons-991638
	W1002 20:28:29.756033  704660 cli_runner.go:211] docker network inspect addons-991638 returned with exit code 1
	I1002 20:28:29.756065  704660 network_create.go:287] error running [docker network inspect addons-991638]: docker network inspect addons-991638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-991638 not found
	I1002 20:28:29.756079  704660 network_create.go:289] output of [docker network inspect addons-991638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-991638 not found
	
	** /stderr **
	I1002 20:28:29.756173  704660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:29.772458  704660 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d5e320}
	I1002 20:28:29.772498  704660 network_create.go:124] attempt to create docker network addons-991638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:28:29.772554  704660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-991638 addons-991638
	I1002 20:28:29.829752  704660 network_create.go:108] docker network addons-991638 192.168.49.0/24 created
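
The preceding lines are a create-if-missing flow: `docker network inspect` fails with "network addons-991638 not found", a free private /24 is selected, and the network is created with minikube's labels and an MTU of 1500. A minimal sketch of the same flow under those assumptions (function name ours), using the exact flags from the log line above:

	package main

	import "os/exec"

	// ensureNetwork creates the bridge network only when inspect says it
	// does not exist yet, mirroring the inspect-then-create sequence above.
	func ensureNetwork(name, subnet, gateway string) error {
		if exec.Command("docker", "network", "inspect", name).Run() == nil {
			return nil // network already present
		}
		return exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name).Run()
	}

	func main() {
		_ = ensureNetwork("addons-991638", "192.168.49.0/24", "192.168.49.1")
	}
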
	I1002 20:28:29.829781  704660 kic.go:121] calculated static IP "192.168.49.2" for the "addons-991638" container
	I1002 20:28:29.829879  704660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:28:29.847391  704660 cli_runner.go:164] Run: docker volume create addons-991638 --label name.minikube.sigs.k8s.io=addons-991638 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:28:29.864875  704660 oci.go:103] Successfully created a docker volume addons-991638
	I1002 20:28:29.864995  704660 cli_runner.go:164] Run: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:28:32.119965  704660 cli_runner.go:217] Completed: docker run --rm --name addons-991638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --entrypoint /usr/bin/test -v addons-991638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.254927204s)
	I1002 20:28:32.120005  704660 oci.go:107] Successfully prepared a docker volume addons-991638
	I1002 20:28:32.120024  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:32.120045  704660 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:28:32.120115  704660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:28:36.088209  704660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-991638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.968050647s)
	I1002 20:28:36.088240  704660 kic.go:203] duration metric: took 3.968193754s to extract preloaded images to volume ...
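
Both `docker run --rm` invocations above use the same trick: a disposable container with an overridden entrypoint mounts the named volume, first to sanity-check that /var/lib exists, then to untar the preloaded image cache into it, so the node container later starts with its images already in place. A sketch of the extraction step (the tarball path here is a hypothetical placeholder; the real path appears in the log):

	package main

	import "os/exec"

	func main() {
		tarball := "/path/to/preload.tar.lz4" // hypothetical; see the real path above
		volume := "addons-991638"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"

		// Disposable container: tar as entrypoint, tarball mounted read-only,
		// named volume mounted as the extraction target.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
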
	W1002 20:28:36.088386  704660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:28:36.088487  704660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:28:36.149550  704660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-991638 --name addons-991638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-991638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-991638 --network addons-991638 --ip 192.168.49.2 --volume addons-991638:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:28:36.432531  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Running}}
	I1002 20:28:36.459147  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:36.484423  704660 cli_runner.go:164] Run: docker exec addons-991638 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:28:36.539034  704660 oci.go:144] the created container "addons-991638" has a running status.
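
Note that after the long `docker run -d` the driver does not assume success: it queries `.State.Running` and `.State.Status` via inspect (the two lines above) before declaring the container up. A sketch of that readiness check as a polling loop (helper name ours):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls "docker container inspect --format={{.State.Running}}"
	// until the container reports true or the deadline passes.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				name, "--format", "{{.State.Running}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %q not running after %s", name, timeout)
	}

	func main() {
		if err := waitRunning("addons-991638", 30*time.Second); err != nil {
			panic(err)
		}
	}
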
	I1002 20:28:36.539068  704660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa...
	I1002 20:28:37.262683  704660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:28:37.288911  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.309985  704660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:28:37.310010  704660 kic_runner.go:114] Args: [docker exec --privileged addons-991638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:28:37.369831  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:28:37.391035  704660 machine.go:93] provisionDockerMachine start ...
	I1002 20:28:37.391126  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.411223  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.411540  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.411549  704660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:28:37.553086  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.553108  704660 ubuntu.go:182] provisioning hostname "addons-991638"
	I1002 20:28:37.553169  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.575369  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.575674  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.575686  704660 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-991638 && echo "addons-991638" | sudo tee /etc/hostname
	I1002 20:28:37.721568  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-991638
	
	I1002 20:28:37.721652  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:37.747484  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:37.747789  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:37.747811  704660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-991638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-991638/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-991638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:28:37.877526  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:28:37.877550  704660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-702037/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-702037/.minikube}
	I1002 20:28:37.877573  704660 ubuntu.go:190] setting up certificates
	I1002 20:28:37.877582  704660 provision.go:84] configureAuth start
	I1002 20:28:37.877644  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:37.894231  704660 provision.go:143] copyHostCerts
	I1002 20:28:37.894324  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem (1078 bytes)
	I1002 20:28:37.894448  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem (1123 bytes)
	I1002 20:28:37.894507  704660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem (1675 bytes)
	I1002 20:28:37.894559  704660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem org=jenkins.addons-991638 san=[127.0.0.1 192.168.49.2 addons-991638 localhost minikube]
	I1002 20:28:38.951532  704660 provision.go:177] copyRemoteCerts
	I1002 20:28:38.951598  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:28:38.951639  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:38.968871  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:39.069322  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:28:39.087473  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:28:39.106442  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:28:39.125193  704660 provision.go:87] duration metric: took 1.247587619s to configureAuth
	I1002 20:28:39.125222  704660 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:28:39.125407  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:28:39.125491  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.145970  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.146282  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.146299  704660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 20:28:39.282106  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 20:28:39.282131  704660 ubuntu.go:71] root file system type: overlay
	I1002 20:28:39.282235  704660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 20:28:39.282310  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.300258  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.300556  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.300651  704660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 20:28:39.442933  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 20:28:39.443023  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:39.460361  704660 main.go:141] libmachine: Using SSH client type: native
	I1002 20:28:39.460680  704660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1002 20:28:39.460703  704660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 20:28:40.382609  704660 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-02 20:28:39.437593143 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
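The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is an idempotency guard: diff exits 0 when the freshly rendered unit matches the installed one, short-circuiting the `||` so dockerd is only reloaded and restarted when the unit actually changed (as it did here, hence the diff output). A hypothetical local equivalent of that guard (the real flow runs these commands over SSH with sudo):

	package main

	import (
		"os"
		"os/exec"
	)

	// updateUnit swaps in the candidate unit and restarts docker only when
	// it differs from the live one; an empty diff means nothing to do.
	func updateUnit(newPath, livePath string) error {
		if exec.Command("diff", "-u", livePath, newPath).Run() == nil {
			return nil // identical: skip the needless daemon restart
		}
		if err := os.Rename(newPath, livePath); err != nil {
			return err
		}
		for _, step := range [][]string{
			{"-f", "daemon-reload"},
			{"-f", "enable", "docker"},
			{"-f", "restart", "docker"},
		} {
			if err := exec.Command("systemctl", step...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		_ = updateUnit("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service")
	}
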
	I1002 20:28:40.382680  704660 machine.go:96] duration metric: took 2.991625077s to provisionDockerMachine
	I1002 20:28:40.382776  704660 client.go:171] duration metric: took 12.273900895s to LocalClient.Create
	I1002 20:28:40.382819  704660 start.go:167] duration metric: took 12.27401677s to libmachine.API.Create "addons-991638"
	I1002 20:28:40.382841  704660 start.go:293] postStartSetup for "addons-991638" (driver="docker")
	I1002 20:28:40.382863  704660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:28:40.382961  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:28:40.383028  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.400184  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.497649  704660 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:28:40.501057  704660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:28:40.501087  704660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:28:40.501099  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/addons for local assets ...
	I1002 20:28:40.501170  704660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/files for local assets ...
	I1002 20:28:40.501198  704660 start.go:296] duration metric: took 118.339458ms for postStartSetup
	I1002 20:28:40.501542  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.519025  704660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/config.json ...
	I1002 20:28:40.519322  704660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:28:40.519374  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.535401  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.626314  704660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:28:40.631258  704660 start.go:128] duration metric: took 12.526256292s to createHost
	I1002 20:28:40.631280  704660 start.go:83] releasing machines lock for "addons-991638", held for 12.526385541s
	I1002 20:28:40.631365  704660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-991638
	I1002 20:28:40.648027  704660 ssh_runner.go:195] Run: cat /version.json
	I1002 20:28:40.648051  704660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:28:40.648079  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.648112  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:28:40.671874  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.672768  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:28:40.765471  704660 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:40.858838  704660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:28:40.863487  704660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:28:40.863561  704660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:28:40.891689  704660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:28:40.891716  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:40.891748  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:40.891847  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:40.905197  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 20:28:40.914585  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 20:28:40.923483  704660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 20:28:40.923613  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 20:28:40.932751  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.941795  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 20:28:40.950514  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:28:40.959583  704660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:28:40.967941  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 20:28:40.976883  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 20:28:40.986149  704660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 20:28:40.995305  704660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:28:41.004003  704660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:28:41.012739  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.128237  704660 ssh_runner.go:195] Run: sudo systemctl restart containerd
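
[Editor's sketch] The run of sed commands above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, migrates runtimes to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports, before the daemon-reload and containerd restart. A stdlib-only sketch of a few of those substitutions done in-process (minikube itself shells out to sed on the guest):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        // Patterns mirror the logged sed commands above.
        edits := []struct{ pattern, repl string }{
            {`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
            {`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        }
        data, err := os.ReadFile("/etc/containerd/config.toml")
        if err != nil {
            return // nothing to edit on this machine
        }
        for _, e := range edits {
            data = regexp.MustCompile(e.pattern).ReplaceAll(data, []byte(e.repl))
        }
        _ = os.WriteFile("/etc/containerd/config.toml", data, 0o644)
    }
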
	I1002 20:28:41.231332  704660 start.go:495] detecting cgroup driver to use...
	I1002 20:28:41.231381  704660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:28:41.231441  704660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 20:28:41.246943  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.259982  704660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:28:41.299529  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:28:41.312040  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 20:28:41.325475  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:28:41.339679  704660 ssh_runner.go:195] Run: which cri-dockerd
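
[Editor's sketch] /etc/crictl.yaml is rewritten just above: earlier in this section it pointed runtime-endpoint at containerd's socket, and now that the docker runtime has been selected it points at cri-dockerd, so crictl talks to the CRI shim actually in use. A one-file Go sketch of that write:

    package main

    import "os"

    func main() {
        // Same payload the tee command above produces.
        const cfg = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
        _ = os.MkdirAll("/etc", 0o755)
        _ = os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644)
    }
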
	I1002 20:28:41.343375  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 20:28:41.351275  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 20:28:41.364332  704660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 20:28:41.484463  704660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 20:28:41.601245  704660 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 20:28:41.601360  704660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 20:28:41.614352  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 20:28:41.626868  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:41.733314  704660 ssh_runner.go:195] Run: sudo systemctl restart docker
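
[Editor's sketch] "configuring docker to use cgroupfs" pushes a small /etc/docker/daemon.json before the reset-failed/daemon-reload/restart sequence. The log records only its size (130 bytes), not its contents, so the following is a plausible reconstruction of that kind of file, not the actual payload:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed content: select the cgroupfs driver to match the host.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out)) // would be written to /etc/docker/daemon.json
    }
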
	I1002 20:28:42.111293  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:28:42.128509  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 20:28:42.145965  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.163934  704660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 20:28:42.308063  704660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 20:28:42.433113  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.552919  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 20:28:42.569022  704660 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 20:28:42.582319  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:42.699949  704660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 20:28:42.769589  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:28:42.783022  704660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 20:28:42.783145  704660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 20:28:42.787107  704660 start.go:563] Will wait 60s for crictl version
	I1002 20:28:42.787194  704660 ssh_runner.go:195] Run: which crictl
	I1002 20:28:42.790829  704660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:28:42.815945  704660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 20:28:42.816103  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.842953  704660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:28:42.874688  704660 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 20:28:42.874787  704660 cli_runner.go:164] Run: docker network inspect addons-991638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:28:42.890887  704660 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:28:42.895320  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:28:42.906278  704660 kubeadm.go:883] updating cluster {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:28:42.906402  704660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:28:42.906467  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.925708  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.925733  704660 docker.go:621] Images already preloaded, skipping extraction
	I1002 20:28:42.925801  704660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:28:42.945361  704660 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 20:28:42.945383  704660 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:28:42.945393  704660 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1002 20:28:42.945504  704660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-991638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:28:42.945582  704660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 20:28:42.996799  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:28:42.996828  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:28:42.996844  704660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:28:42.996865  704660 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-991638 NodeName:addons-991638 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:28:42.996983  704660 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-991638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
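[Editor's sketch] The kubeadm.yaml above stacks four YAML documents separated by ---: InitConfiguration (node registration and bind address), ClusterConfiguration (API server SANs and admission plugins, control-plane endpoint, pod/service CIDRs), KubeletConfiguration (cgroupfs driver, cri-dockerd endpoint, disabled disk eviction), and KubeProxyConfiguration. A stdlib-only sketch that splits the written file back into its documents and reports each kind:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            // The first "kind:" line identifies the document.
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind:") {
                    fmt.Printf("document %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
                    break
                }
            }
        }
    }
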
	I1002 20:28:42.997055  704660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:28:43.006552  704660 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:28:43.006645  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:28:43.015646  704660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1002 20:28:43.030545  704660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:28:43.044123  704660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 20:28:43.057931  704660 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:28:43.061696  704660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:28:43.072014  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:28:43.187259  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:28:43.203829  704660 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638 for IP: 192.168.49.2
	I1002 20:28:43.203899  704660 certs.go:195] generating shared ca certs ...
	I1002 20:28:43.203929  704660 certs.go:227] acquiring lock for ca certs: {Name:mk80feb87d46a3c61de00b383dd8ac7fd2e19095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:43.204734  704660 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key
	I1002 20:28:44.637131  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt ...
	I1002 20:28:44.637163  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt: {Name:mkb6d8319d3a74d42b081683e314c37e53586717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637366  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key ...
	I1002 20:28:44.637379  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key: {Name:mkbd44d267c3b1cf1fed0a906ac7bf46743d8695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:44.637481  704660 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key
	I1002 20:28:45.683223  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt ...
	I1002 20:28:45.683262  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt: {Name:mkf2892474e0dfa857be215b552060af628196ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683490  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key ...
	I1002 20:28:45.683507  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key: {Name:mkb3e427bf0a6e7ceb613b926e3c90e07409da52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:45.683588  704660 certs.go:257] generating profile certs ...
	I1002 20:28:45.683654  704660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key
	I1002 20:28:45.683671  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt with IP's: []
	I1002 20:28:46.046463  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt ...
	I1002 20:28:46.046497  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: {Name:mk51f9d8abe3f7006e638458dae2df70cdaa936a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046676  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key ...
	I1002 20:28:46.046691  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.key: {Name:mke5acc604e8c4ff883546df37d116f9c766e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.046773  704660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b
	I1002 20:28:46.046795  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:28:46.569113  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b ...
	I1002 20:28:46.569145  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b: {Name:mk40a7d58b6523a132d065d0266597e722b3762d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.569955  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b ...
	I1002 20:28:46.569974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b: {Name:mkbe601cfd4f3105ca705f6de8b8f9d490a11ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.570609  704660 certs.go:382] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt
	I1002 20:28:46.570694  704660 certs.go:386] copying /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key.45e60b9b -> /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key
	I1002 20:28:46.570747  704660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key
	I1002 20:28:46.570767  704660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt with IP's: []
	I1002 20:28:46.754716  704660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt ...
	I1002 20:28:46.754747  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt: {Name:mkd0f46ec8109fe64dda020f7c270bd3d9dd6bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.754958  704660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key ...
	I1002 20:28:46.754974  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key: {Name:mk7b62b96428d619ab88e3c0c6972f37ee378b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:28:46.755195  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:28:46.755238  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:28:46.755269  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:28:46.755294  704660 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem (1675 bytes)
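
[Editor's sketch] The certs.go/crypto.go sequence above generates a shared CA pair (minikubeCA), a proxy-client CA, and then profile certificates signed by them, including an apiserver cert whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. The shape of the CA step, as a stdlib-only sketch (minikube's actual helper differs; key size is an assumption, while the 26280h validity matches the CertExpiration in the cluster config above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Self-signed CA template: IsCA plus cert-sign key usage.
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
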
	I1002 20:28:46.755827  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:28:46.773406  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:28:46.790954  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:28:46.807835  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:28:46.825141  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:28:46.842372  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:28:46.860238  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:28:46.877776  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:28:46.894424  704660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:28:46.911754  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:28:46.925117  704660 ssh_runner.go:195] Run: openssl version
	I1002 20:28:46.931161  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:28:46.940887  704660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945128  704660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.945198  704660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:28:46.986089  704660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
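
[Editor's sketch] The symlink name b5213941.0 follows OpenSSL's subject-hash convention: TLS libraries look a CA up by hashing its subject and searching for <hash>.<n> in the certs directory, which is why the openssl x509 -hash call precedes the ln -fs. The same two steps, sketched with os/exec:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // `openssl x509 -hash` prints the subject hash (b5213941 here).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        hash := strings.TrimSpace(string(out))
        // OpenSSL resolves CAs as <subject-hash>.<n> inside the certs dir.
        _ = os.Symlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/"+hash+".0")
    }
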
	I1002 20:28:46.995228  704660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:28:46.998614  704660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:28:46.998670  704660 kubeadm.go:400] StartCluster: {Name:addons-991638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-991638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:28:46.998801  704660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 20:28:47.017260  704660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:28:47.024934  704660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:28:47.032572  704660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:28:47.032637  704660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:28:47.040541  704660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:28:47.040563  704660 kubeadm.go:157] found existing configuration files:
	
	I1002 20:28:47.040632  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:28:47.048232  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:28:47.048324  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:28:47.055897  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:28:47.063851  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:28:47.063972  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:28:47.071920  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.079791  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:28:47.079884  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:28:47.087482  704660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:28:47.095260  704660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:28:47.095325  704660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:28:47.102743  704660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:28:47.143961  704660 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:28:47.144023  704660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:28:47.171162  704660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:28:47.171292  704660 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:28:47.171362  704660 kubeadm.go:318] OS: Linux
	I1002 20:28:47.171451  704660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:28:47.171534  704660 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:28:47.171621  704660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:28:47.171707  704660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:28:47.171790  704660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:28:47.171876  704660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:28:47.171956  704660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:28:47.172038  704660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:28:47.172128  704660 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:28:47.235837  704660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:28:47.235957  704660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:28:47.236052  704660 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:28:47.257841  704660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:28:47.262676  704660 out.go:252]   - Generating certificates and keys ...
	I1002 20:28:47.262771  704660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:28:47.262845  704660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:28:47.756271  704660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:28:48.584093  704660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:28:48.888267  704660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:28:49.699713  704660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:28:50.057163  704660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:28:50.057649  704660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.779363  704660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:28:50.779734  704660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-991638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:28:50.900170  704660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:28:51.497655  704660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:28:51.954519  704660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:28:51.954818  704660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:28:53.080191  704660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:28:53.266970  704660 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:28:53.973649  704660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:28:54.725487  704660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:28:55.109834  704660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:28:55.110186  704660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:28:55.113467  704660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:28:55.117318  704660 out.go:252]   - Booting up control plane ...
	I1002 20:28:55.117435  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:28:55.117518  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:28:55.118060  704660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:28:55.141929  704660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:28:55.142323  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:28:55.150629  704660 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:28:55.150957  704660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:28:55.151008  704660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:28:55.286296  704660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:28:55.286428  704660 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:28:56.789783  704660 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501225822s
	I1002 20:28:56.789937  704660 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:28:56.790047  704660 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:28:56.790165  704660 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:28:56.790264  704660 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:28:58.802179  704660 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.012574504s
	I1002 20:29:00.806811  704660 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.017417752s
	I1002 20:29:02.791474  704660 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002021418s
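
[Editor's sketch] The kubelet-check and control-plane-check lines above come from kubeadm polling well-known health endpoints: the kubelet at 127.0.0.1:10248/healthz, the controller-manager at :10257/healthz, the scheduler at :10259/livez, and the API server at :8443/livez, each with a 4m0s budget. The polling pattern, sketched against the kubelet endpoint (the retry interval is an assumption):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kubelet")
    }
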
	I1002 20:29:02.814104  704660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:29:02.827699  704660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:29:02.846247  704660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:29:02.846862  704660 kubeadm.go:318] [mark-control-plane] Marking the node addons-991638 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:29:02.861722  704660 kubeadm.go:318] [bootstrap-token] Using token: z0jdd4.ysfi1vhms678tv6t
	I1002 20:29:02.864796  704660 out.go:252]   - Configuring RBAC rules ...
	I1002 20:29:02.864929  704660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:29:02.869885  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:29:02.888805  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:29:02.892893  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:29:02.897307  704660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:29:02.902794  704660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:29:03.198711  704660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:29:03.626604  704660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:29:04.197660  704660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:29:04.199081  704660 kubeadm.go:318] 
	I1002 20:29:04.199168  704660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:29:04.199174  704660 kubeadm.go:318] 
	I1002 20:29:04.199283  704660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:29:04.199304  704660 kubeadm.go:318] 
	I1002 20:29:04.199332  704660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:29:04.199403  704660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:29:04.199462  704660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:29:04.199470  704660 kubeadm.go:318] 
	I1002 20:29:04.199544  704660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:29:04.199559  704660 kubeadm.go:318] 
	I1002 20:29:04.199633  704660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:29:04.199648  704660 kubeadm.go:318] 
	I1002 20:29:04.199708  704660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:29:04.199805  704660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:29:04.199891  704660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:29:04.199904  704660 kubeadm.go:318] 
	I1002 20:29:04.199999  704660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:29:04.200089  704660 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:29:04.200099  704660 kubeadm.go:318] 
	I1002 20:29:04.200207  704660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200351  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 \
	I1002 20:29:04.200382  704660 kubeadm.go:318] 	--control-plane 
	I1002 20:29:04.200390  704660 kubeadm.go:318] 
	I1002 20:29:04.200503  704660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:29:04.200516  704660 kubeadm.go:318] 
	I1002 20:29:04.200612  704660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z0jdd4.ysfi1vhms678tv6t \
	I1002 20:29:04.200736  704660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b5b12a6cad47572b2aeb9aba476c2fd8688fcd4a60c8ea9425f790bb5d1268d2 
	I1002 20:29:04.203776  704660 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:29:04.204016  704660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:29:04.204131  704660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:29:04.204150  704660 cni.go:84] Creating CNI manager for ""
	I1002 20:29:04.204164  704660 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:29:04.207498  704660 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:29:04.210410  704660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:29:04.217868  704660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
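
[Editor's sketch] The 496-byte /etc/cni/net.d/1-k8s.conflist pushed here is the bridge CNI configuration announced two lines above; its exact contents are not printed in the log, so the following is a plausible reconstruction of a minimal bridge-plus-portmap conflist using the 10.244.0.0/16 pod CIDR from the kubeadm config, not the real payload:

    package main

    import "fmt"

    func main() {
        // Assumed conflist shape; field values are a reconstruction.
        fmt.Print(`{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `)
    }
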
	I1002 20:29:04.235604  704660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:29:04.235701  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.235739  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-991638 minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-991638 minikube.k8s.io/primary=true
	I1002 20:29:04.254399  704660 ops.go:34] apiserver oom_adj: -16
	I1002 20:29:04.369134  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:04.869740  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.370081  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:05.870196  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.369731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:06.870115  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.369228  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:07.869851  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.369279  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.869731  704660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:29:08.972720  704660 kubeadm.go:1113] duration metric: took 4.737085496s to wait for elevateKubeSystemPrivileges
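
[Editor's sketch] The repeated "kubectl get sa default" runs at half-second intervals above are a readiness loop: the default ServiceAccount only appears once the controller-manager's token controller has run, so its existence is a cheap proxy for a control plane that is actually serving. A sketch of that loop (the 2-minute timeout is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
        start := time.Now()
        for time.Since(start) < 2*time.Minute {
            cmd := exec.Command(kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                fmt.Println("default service account ready after", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
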
	I1002 20:29:08.972751  704660 kubeadm.go:402] duration metric: took 21.974085235s to StartCluster
	I1002 20:29:08.972769  704660 settings.go:142] acquiring lock: {Name:mk05279472feb5063a5c2285eba6fd6d59490060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.972884  704660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:29:08.973255  704660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/kubeconfig: {Name:mk451cd073bc3a44904ff8d0351225145e56e5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:29:08.973483  704660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:29:08.973596  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:29:08.973840  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.973881  704660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 20:29:08.973962  704660 addons.go:69] Setting yakd=true in profile "addons-991638"
	I1002 20:29:08.973977  704660 addons.go:238] Setting addon yakd=true in "addons-991638"
	I1002 20:29:08.973998  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.974491  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.974944  704660 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-991638"
	I1002 20:29:08.974969  704660 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-991638"
	I1002 20:29:08.974993  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.975410  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.975798  704660 addons.go:69] Setting cloud-spanner=true in profile "addons-991638"
	I1002 20:29:08.975820  704660 addons.go:238] Setting addon cloud-spanner=true in "addons-991638"
	I1002 20:29:08.975844  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.976228  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.978568  704660 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-991638"
	I1002 20:29:08.978639  704660 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:08.978669  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.979258  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.980070  704660 out.go:179] * Verifying Kubernetes components...
	I1002 20:29:08.980299  704660 addons.go:69] Setting registry-creds=true in profile "addons-991638"
	I1002 20:29:08.980320  704660 addons.go:238] Setting addon registry-creds=true in "addons-991638"
	I1002 20:29:08.980348  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.980878  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984024  704660 addons.go:69] Setting storage-provisioner=true in profile "addons-991638"
	I1002 20:29:08.984111  704660 addons.go:238] Setting addon storage-provisioner=true in "addons-991638"
	I1002 20:29:08.985311  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984905  704660 addons.go:69] Setting default-storageclass=true in profile "addons-991638"
	I1002 20:29:08.986095  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-991638"
	I1002 20:29:08.986385  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.997940  704660 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-991638"
	I1002 20:29:08.997997  704660 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-991638"
	I1002 20:29:08.998330  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984914  704660 addons.go:69] Setting gcp-auth=true in profile "addons-991638"
	I1002 20:29:08.998967  704660 mustload.go:65] Loading cluster: addons-991638
	I1002 20:29:08.999148  704660 config.go:182] Loaded profile config "addons-991638": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:29:08.999394  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984921  704660 addons.go:69] Setting ingress=true in profile "addons-991638"
	I1002 20:29:09.012451  704660 addons.go:238] Setting addon ingress=true in "addons-991638"
	I1002 20:29:09.012506  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.012981  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.017454  704660 addons.go:69] Setting volcano=true in profile "addons-991638"
	I1002 20:29:09.017490  704660 addons.go:238] Setting addon volcano=true in "addons-991638"
	I1002 20:29:09.017527  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.018061  704660 addons.go:69] Setting volumesnapshots=true in profile "addons-991638"
	I1002 20:29:09.018133  704660 addons.go:238] Setting addon volumesnapshots=true in "addons-991638"
	I1002 20:29:09.018173  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:08.984925  704660 addons.go:69] Setting ingress-dns=true in profile "addons-991638"
	I1002 20:29:09.025533  704660 addons.go:238] Setting addon ingress-dns=true in "addons-991638"
	I1002 20:29:09.025587  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.026063  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.044490  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984928  704660 addons.go:69] Setting inspektor-gadget=true in profile "addons-991638"
	I1002 20:29:09.049039  704660 addons.go:238] Setting addon inspektor-gadget=true in "addons-991638"
	I1002 20:29:09.049079  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.049563  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984931  704660 addons.go:69] Setting metrics-server=true in profile "addons-991638"
	I1002 20:29:09.074105  704660 addons.go:238] Setting addon metrics-server=true in "addons-991638"
	I1002 20:29:09.074149  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.075253  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984945  704660 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-991638"
	I1002 20:29:09.101041  704660 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-991638"
	I1002 20:29:09.101085  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.101634  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.134221  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.984949  704660 addons.go:69] Setting registry=true in profile "addons-991638"
	I1002 20:29:09.134685  704660 addons.go:238] Setting addon registry=true in "addons-991638"
	I1002 20:29:09.134721  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.135150  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.166068  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:08.985251  704660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:29:09.210573  704660 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:29:09.222512  704660 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:29:09.228645  704660 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:09.228697  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:29:09.228802  704660 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:09.228834  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:29:09.228917  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.232353  704660 addons.go:238] Setting addon default-storageclass=true in "addons-991638"
	I1002 20:29:09.232403  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.232836  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.240129  704660 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:29:09.228818  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252033  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:29:09.281457  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.289194  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:09.276652  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:29:09.291469  704660 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:29:09.291547  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.252086  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317140  704660 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-991638"
	I1002 20:29:09.317269  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:09.317905  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:09.321130  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:29:09.324328  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:29:09.329618  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:29:09.329846  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:09.329862  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:29:09.329924  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.330072  704660 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:29:09.332483  704660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:09.332506  704660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:29:09.332556  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.352512  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.359187  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:29:09.364275  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:29:09.364559  704660 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:09.364575  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:29:09.364638  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.375690  704660 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 20:29:09.375940  704660 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:29:09.386355  704660 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:09.386396  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:29:09.386476  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.402265  704660 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 20:29:09.412773  704660 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 20:29:09.418587  704660 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:09.418666  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 20:29:09.418775  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.419320  704660 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:29:09.423729  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:29:09.423757  704660 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:29:09.423846  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.441567  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:29:09.442010  704660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:29:09.447860  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:29:09.451279  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:29:09.453459  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:09.453480  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:29:09.453561  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.455757  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:29:09.455822  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:29:09.455914  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.465113  704660 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:29:09.469477  704660 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:29:09.469509  704660 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:29:09.469576  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.479455  704660 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 20:29:09.482830  704660 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:29:09.487219  704660 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:29:09.487285  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:29:09.487386  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.498491  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.506413  704660 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:29:09.509491  704660 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:29:09.509670  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:29:09.509687  704660 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:29:09.509759  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.515326  704660 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:09.515349  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:29:09.515413  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.556794  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.592629  704660 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:29:09.595721  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.601773  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.604845  704660 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:29:09.607957  704660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:09.607982  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:29:09.608078  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:09.639621  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.660885  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.690935  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.696294  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.717153  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.743500  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.746463  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.751738  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.757583  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.764350  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.764394  704660 retry.go:31] will retry after 315.573784ms: ssh: handshake failed: EOF
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:09.769733  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	W1002 20:29:09.784428  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:09.784456  704660 retry.go:31] will retry after 304.179518ms: ssh: handshake failed: EOF
	I1002 20:29:09.898194  704660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 20:29:09.936055  704660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:29:10.111040  704660 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:29:10.111126  704660 retry.go:31] will retry after 465.641139ms: ssh: handshake failed: EOF
	I1002 20:29:10.668679  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:29:10.668702  704660 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:29:10.797217  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:29:10.797297  704660 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:29:10.865274  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:29:10.881693  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:29:10.881716  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:29:10.886079  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:29:10.921408  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:29:10.943803  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:29:10.943828  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:29:10.978775  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:29:10.978805  704660 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:29:10.994840  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:29:11.011037  704660 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:29:11.011073  704660 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:29:11.030493  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:29:11.032022  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:29:11.032044  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:29:11.035800  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:29:11.035830  704660 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:29:11.071721  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 20:29:11.091723  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:29:11.106681  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:29:11.145109  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:29:11.145139  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:29:11.148280  704660 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.148309  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:29:11.202167  704660 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.202196  704660 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:29:11.305203  704660 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.305232  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:29:11.316393  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:29:11.329281  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:29:11.329312  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:29:11.355129  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:29:11.398833  704660 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.398857  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:29:11.409753  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:29:11.409781  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:29:11.426941  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:11.428747  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:29:11.489773  704660 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:29:11.489841  704660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:29:11.494567  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:29:11.542853  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:29:11.615125  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:29:11.615198  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:29:11.677959  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:29:11.678040  704660 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:29:11.863554  704660 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:29:11.863639  704660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:29:12.043926  704660 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.044010  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:29:12.200094  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:29:12.200165  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:29:12.470826  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:12.509295  704660 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.573157378s)
	I1002 20:29:12.509455  704660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.611238205s)
	I1002 20:29:12.509528  704660 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
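Note: the bash pipeline completed above edits the CoreDNS Corefile in place via sed before replacing the ConfigMap. Reconstructed from the command itself, the relevant part of the rewritten Corefile looks roughly like this (surrounding stanzas elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
    }

The CoreDNS hosts plugin answers queries for host.minikube.internal with the host-side address 192.168.49.2's gateway-side peer 192.168.49.1 and falls through to the forward plugin for everything else, which is what the "host record injected" line above reports.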
	I1002 20:29:12.511038  704660 node_ready.go:35] waiting up to 6m0s for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515289  704660 node_ready.go:49] node "addons-991638" is "Ready"
	I1002 20:29:12.515313  704660 node_ready.go:38] duration metric: took 3.935549ms for node "addons-991638" to be "Ready" ...
	I1002 20:29:12.515328  704660 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:29:12.515389  704660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:29:12.613485  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:29:12.613555  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:29:12.794628  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.92930886s)
	I1002 20:29:13.024378  704660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-991638" context rescaled to 1 replicas
	I1002 20:29:13.094487  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:29:13.094553  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:29:13.666276  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:29:13.666353  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:29:14.220703  704660 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:14.220782  704660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:29:14.633137  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:29:16.743396  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:29:16.743479  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:16.772705  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:17.648047  704660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:29:17.758402  704660 addons.go:238] Setting addon gcp-auth=true in "addons-991638"
	I1002 20:29:17.758451  704660 host.go:66] Checking if "addons-991638" exists ...
	I1002 20:29:17.758915  704660 cli_runner.go:164] Run: docker container inspect addons-991638 --format={{.State.Status}}
	I1002 20:29:17.782244  704660 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:29:17.782296  704660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-991638
	I1002 20:29:17.815647  704660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/addons-991638/id_rsa Username:docker}
	I1002 20:29:19.091966  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.205841491s)
	I1002 20:29:19.092058  704660 addons.go:479] Verifying addon ingress=true in "addons-991638"
	I1002 20:29:19.092330  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.170806627s)
	I1002 20:29:19.092745  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.097877392s)
	I1002 20:29:19.092800  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.06227576s)
	I1002 20:29:19.095718  704660 out.go:179] * Verifying ingress addon...
	I1002 20:29:19.099717  704660 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:29:19.283832  704660 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:29:19.283853  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:19.648674  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.108386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:20.606825  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.102257  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.030489478s)
	I1002 20:29:21.102331  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.01058393s)
	I1002 20:29:21.102523  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.995812674s)
	I1002 20:29:21.102576  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.786160691s)
	I1002 20:29:21.102665  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.747515739s)
	I1002 20:29:21.102736  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.675772832s)
	W1002 20:29:21.102757  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:21.102773  704660 retry.go:31] will retry after 165.427061ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
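Note: this failure is consistent with the earlier transfer line "scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)": a 14-byte file is too small to hold a complete manifest, so kubectl's client-side validation correctly reports that apiVersion and kind are unset. Every document kubectl applies must declare at least those two fields; a well-formed CRD manifest starts along these lines (illustrative skeleton only, not the actual addon content, and the name/group here are placeholders):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.gadget.example.com
    spec:
      group: gadget.example.com
      ...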
	I1002 20:29:21.102843  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.674073931s)
	I1002 20:29:21.102857  704660 addons.go:479] Verifying addon metrics-server=true in "addons-991638"
	I1002 20:29:21.102896  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.608257689s)
	I1002 20:29:21.102908  704660 addons.go:479] Verifying addon registry=true in "addons-991638"
	I1002 20:29:21.103092  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.560138876s)
	I1002 20:29:21.103416  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.632501338s)
	W1002 20:29:21.103659  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:29:21.103480  704660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.588080107s)
	I1002 20:29:21.103716  704660 api_server.go:72] duration metric: took 12.130202438s to wait for apiserver process to appear ...
	I1002 20:29:21.103723  704660 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:29:21.103737  704660 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:29:21.104569  704660 retry.go:31] will retry after 131.465799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
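Note: this is a create-ordering race rather than a malformed manifest. The stdout above shows the VolumeSnapshotClass CRD being created in the same apply ("volumesnapshotclasses.snapshot.storage.k8s.io created"), but CRD registration in the API server is asynchronous, so the REST mapping for kind VolumeSnapshotClass may not exist yet when the class object from csi-hostpath-snapshotclass.yaml is processed; hence "ensure CRDs are installed first". The object being applied is a small manifest roughly like this (a sketch: only the object name appears in the log, and the driver name is assumed from the csi-hostpath addon):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass
    driver: hostpath.csi.k8s.io
    deletionPolicy: Delete

The retry below re-applies the same files (with kubectl apply --force) once the CRDs have had time to register, and completes successfully a few seconds later.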
	I1002 20:29:21.106517  704660 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-991638 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:29:21.106623  704660 out.go:179] * Verifying registry addon...
	I1002 20:29:21.110687  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:29:21.128889  704660 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 20:29:21.146707  704660 api_server.go:141] control plane version: v1.34.1
	I1002 20:29:21.146750  704660 api_server.go:131] duration metric: took 43.020902ms to wait for apiserver health ...
	I1002 20:29:21.146760  704660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:29:21.231778  704660 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:29:21.231803  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.232570  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.232990  704660 system_pods.go:59] 16 kube-system pods found
	I1002 20:29:21.233027  704660 system_pods.go:61] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233037  704660 system_pods.go:61] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.233049  704660 system_pods.go:61] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.233054  704660 system_pods.go:61] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.233058  704660 system_pods.go:61] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.233072  704660 system_pods.go:61] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.233077  704660 system_pods.go:61] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.233082  704660 system_pods.go:61] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.233093  704660 system_pods.go:61] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.233100  704660 system_pods.go:61] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.233110  704660 system_pods.go:61] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.233117  704660 system_pods.go:61] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.233126  704660 system_pods.go:61] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.233138  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233145  704660 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.233152  704660 system_pods.go:61] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.233159  704660 system_pods.go:74] duration metric: took 86.393348ms to wait for pod list to return data ...
	I1002 20:29:21.233171  704660 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:29:21.236551  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:29:21.269271  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:29:21.290207  704660 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
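Note: "Operation cannot be fulfilled ... the object has been modified" is the standard Kubernetes optimistic-concurrency conflict: the addon code read the local-path StorageClass, another writer updated it in between, and the write was rejected because the stored resourceVersion no longer matched. Marking a class non-default is just an annotation update, roughly equivalent to the following (illustrative command, not what minikube runs internally):

    kubectl annotate storageclass local-path \
      storageclass.kubernetes.io/is-default-class=false --overwrite

Re-reading the object and retrying the write resolves this class of conflict, which is why it surfaces here only as a warning.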
	I1002 20:29:21.375005  704660 default_sa.go:45] found service account: "default"
	I1002 20:29:21.375031  704660 default_sa.go:55] duration metric: took 141.854284ms for default service account to be created ...
	I1002 20:29:21.375042  704660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:29:21.403678  704660 system_pods.go:86] 17 kube-system pods found
	I1002 20:29:21.403714  704660 system_pods.go:89] "coredns-66bc5c9577-pf6sn" [11eec08f-4fa4-47ae-a3f2-01bcc98aea4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403724  704660 system_pods.go:89] "coredns-66bc5c9577-wkwnx" [9f8017e9-2372-43e8-89c4-99b231e4c28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:29:21.403730  704660 system_pods.go:89] "csi-hostpath-attacher-0" [e1b49a9e-cc2c-43ad-a104-7517ae3b9b71] Pending
	I1002 20:29:21.403736  704660 system_pods.go:89] "etcd-addons-991638" [d4335455-400f-49fd-8096-d02ef2d0150d] Running
	I1002 20:29:21.403740  704660 system_pods.go:89] "kube-apiserver-addons-991638" [02259c45-07fd-469a-9b8c-6403b37f1167] Running
	I1002 20:29:21.403744  704660 system_pods.go:89] "kube-controller-manager-addons-991638" [4f302466-70be-4234-8140-bb95629da2c2] Running
	I1002 20:29:21.403751  704660 system_pods.go:89] "kube-ingress-dns-minikube" [4ae125c8-8e3a-414c-9e23-6d7842a41075] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:29:21.403755  704660 system_pods.go:89] "kube-proxy-xfnp6" [1c9ffe26-411a-449b-aec4-3c5aab622da3] Running
	I1002 20:29:21.403760  704660 system_pods.go:89] "kube-scheduler-addons-991638" [46f2da79-4763-4e7e-80d3-eca22f15f252] Running
	I1002 20:29:21.403767  704660 system_pods.go:89] "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:29:21.403774  704660 system_pods.go:89] "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:29:21.403789  704660 system_pods.go:89] "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:29:21.403795  704660 system_pods.go:89] "registry-creds-764b6fb674-nsjx4" [915a1770-063b-4100-8bfa-c7e4d2680639] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:29:21.403857  704660 system_pods.go:89] "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:29:21.403871  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-htvkn" [c8246e64-b5a7-4ad2-91f2-7f5368d9668a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403878  704660 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n92kj" [d7c03bb8-b197-4d6e-ae66-f0f72a2f4a28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:29:21.403881  704660 system_pods.go:89] "storage-provisioner" [fe3b9f21-0c27-4228-85a3-cd2441baab3f] Running
	I1002 20:29:21.403889  704660 system_pods.go:126] duration metric: took 28.840694ms to wait for k8s-apps to be running ...
	I1002 20:29:21.403905  704660 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:29:21.403962  704660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:29:21.633145  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:21.633273  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:21.719440  704660 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.937165373s)
	I1002 20:29:21.723512  704660 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:29:21.737044  704660 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:29:21.739233  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.105985614s)
	I1002 20:29:21.739269  704660 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-991638"
	I1002 20:29:21.741380  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:29:21.741407  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:29:21.741519  704660 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:29:21.746220  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:29:21.749098  704660 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:29:21.749124  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
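The kapi.go:75/86/96 lines here and throughout the rest of this log are one polling loop per addon: list the pods matching a label selector, report the aggregate state, sleep, repeat until the pods are Running or the timeout expires. A rough stand-alone sketch of that pattern with client-go follows; the kubeconfig path and selector are taken from the log, the 500ms interval is inferred from the timestamp cadence, and the rest is illustrative rather than minikube's actual kapi implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until some pod matching selector reports phase
// Running, the same loop shape as the kapi.go:96 lines in this log.
func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log lines tick at roughly this cadence
	}
	return fmt.Errorf("no Running pod for %q in %q within %s", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForRunningPod(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute))
}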
	I1002 20:29:21.885645  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:29:21.885723  704660 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:29:21.999241  704660 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:21.999306  704660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:29:22.103650  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.107641  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:29:22.115675  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.249646  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:22.603835  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:22.614145  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:22.750221  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.104878  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.113990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.250841  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:23.614664  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:23.616397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:23.754661  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.028432  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.791823015s)
	I1002 20:29:24.104308  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.114739  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.250667  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.302476  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.194753737s)
	I1002 20:29:24.302845  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.033536467s)
	W1002 20:29:24.302913  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:24.302985  704660 retry.go:31] will retry after 309.54405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
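This validation error is the root of every retry that follows: kubectl's client-side validation requires each manifest document to declare both apiVersion and kind, and "[apiVersion not set, kind not set]" means the first document in ig-crd.yaml carries neither, which typically indicates an empty or incompletely written file rather than a schema mismatch (note that the ig-deployment.yaml objects in stdout all apply cleanly). A tiny sketch of the check that is failing, using a generic YAML decoder and a hypothetical document:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta holds the two fields kubectl insists on for every document;
// when both decode to empty strings, the apply fails exactly as above.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Hypothetical document with neither field set, as the failing
	// ig-crd.yaml apparently has.
	doc := []byte("metadata:\n  name: example\n")

	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		panic(err)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		fmt.Println("error validating data: [apiVersion not set, kind not set]")
	}
}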
	I1002 20:29:24.302944  704660 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.89841849s)
	I1002 20:29:24.303063  704660 system_svc.go:56] duration metric: took 2.899157354s WaitForService to wait for kubelet
	I1002 20:29:24.303086  704660 kubeadm.go:586] duration metric: took 15.329570576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:29:24.303134  704660 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:29:24.305338  704660 addons.go:479] Verifying addon gcp-auth=true in "addons-991638"
	I1002 20:29:24.308194  704660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:29:24.308224  704660 node_conditions.go:123] node cpu capacity is 2
	I1002 20:29:24.308238  704660 node_conditions.go:105] duration metric: took 5.087392ms to run NodePressure ...
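The NodePressure check above reads node capacities as Kubernetes resource quantities; 203034800Ki is a binary-suffixed quantity, roughly 193.6 GiB of ephemeral storage. A quick sketch of how such a string is interpreted, assuming the standard apimachinery resource package:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The ephemeral-storage capacity exactly as logged above.
	q := resource.MustParse("203034800Ki")
	// Value() converts to base units: the Ki suffix multiplies by 1024,
	// so for storage quantities this is bytes.
	fmt.Printf("%s = %d bytes (~%.1f GiB)\n", q.String(), q.Value(), float64(q.Value())/(1<<30))
}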
	I1002 20:29:24.308251  704660 start.go:241] waiting for startup goroutines ...
	I1002 20:29:24.310445  704660 out.go:179] * Verifying gcp-auth addon...
	I1002 20:29:24.313602  704660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:29:24.325918  704660 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:29:24.325990  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:24.603413  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:24.613652  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:24.613983  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:24.750444  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:24.817604  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.103685  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.118065  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.249976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.317010  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.603841  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:25.613949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:25.750092  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:25.817987  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:25.957381  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.343690162s)
	W1002 20:29:25.957546  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:25.957590  704660 retry.go:31] will retry after 334.218122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:26.104386  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.114584  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.250032  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.292352  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:26.317525  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:26.604047  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:26.613938  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:26.750249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:26.817111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.103343  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.113575  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.250109  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.317078  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.444622  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.152189827s)
	W1002 20:29:27.444714  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:27.444752  704660 retry.go:31] will retry after 546.51266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:27.604261  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:27.614167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:27.749521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:27.817914  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:27.992173  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:28.104304  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.114156  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.249193  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.317122  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:28.603290  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:28.614437  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:28.749750  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:28.817014  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:29:28.983712  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:28.983784  704660 retry.go:31] will retry after 1.260023447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:29.103350  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.114454  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.249644  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.317067  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:29.602986  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:29.613726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:29.749688  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:29.816730  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.103822  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.114057  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.244571  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:30.250615  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:30.603619  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:30.614026  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:30.749853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:30.816479  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.103600  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.114190  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.249506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.298691  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.054084159s)
	W1002 20:29:31.298721  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:31.298741  704660 retry.go:31] will retry after 1.646308182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:31.316219  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:31.605040  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:31.631189  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:31.750015  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:31.817796  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.103881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.116470  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.250021  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.317307  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.604391  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:32.614775  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:32.750540  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:32.816630  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:32.946032  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:33.104871  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.115283  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.250183  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.317668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:33.603187  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:33.614529  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:33.749647  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:33.817102  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.018177  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.072106262s)
	W1002 20:29:34.018217  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:34.018266  704660 retry.go:31] will retry after 2.385257575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:34.104529  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.114836  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.250452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.318843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:34.603645  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:34.614617  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:34.750082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:34.817533  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.107703  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.114893  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.251718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:35.603848  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:35.613657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:35.750110  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:35.816940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.103942  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.113970  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.250099  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.316846  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:36.404147  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:36.604239  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:36.613891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:36.750685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:36.818255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.103487  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.114495  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.250302  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.316913  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:37.595720  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.191535427s)
	W1002 20:29:37.595768  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:37.595789  704660 retry.go:31] will retry after 3.1319796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:37.604699  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:37.613531  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:37.750080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:37.820120  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.135110  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.135518  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.251304  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.317891  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:38.603678  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:38.614208  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:38.750230  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:38.817842  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.110039  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.123577  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.253100  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.320981  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:39.606978  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:39.619008  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:39.757188  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:39.821029  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.104171  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.114472  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.250599  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.316853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:40.603622  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:40.614494  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:40.728573  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:40.750499  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:40.817269  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.103718  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.113793  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.251438  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.323113  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:41.606477  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:41.615889  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:41.749940  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:41.819471  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.104623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.115622  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.203580  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.474960878s)
	W1002 20:29:42.203682  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:42.203776  704660 retry.go:31] will retry after 7.48710054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:29:42.250824  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.317605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:42.603374  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:42.614191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:42.750400  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:42.816718  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.103173  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.114483  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.249820  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.317639  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:43.603139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:43.614668  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:43.750509  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:43.817740  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.103982  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.113850  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.250679  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.317521  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:44.604766  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:44.615339  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:44.749664  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:44.817244  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.105520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.115165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.323737  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:45.603415  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:45.614694  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:45.750384  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:45.817336  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.104015  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.113900  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.250650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.316397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:46.603826  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:46.613857  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:46.750135  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:46.817184  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.103139  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.114040  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.250197  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.316961  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:47.603106  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:47.613879  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:47.753191  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:47.816593  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.104633  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.114511  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.249966  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.317031  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:48.603266  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:48.614360  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:48.750158  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:48.817128  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.103974  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.113579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.250363  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.317726  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:49.603262  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:49.614568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:49.691764  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:29:49.753093  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:49.818136  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.106234  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.117011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.250613  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.317535  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:50.605091  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:50.615017  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:50.751316  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:50.817578  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.107737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.116527  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.251344  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.319605  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:51.408757  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.716938043s)
	W1002 20:29:51.408854  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:29:51.408899  704660 retry.go:31] will retry after 12.661372424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
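Stepping back from the individual failures: the retry.go:31 delays across this run grow from 309ms through 334ms, 546ms, 1.26s, 1.65s, 2.39s, 3.13s and 7.49s to 12.66s, the classic shape of exponential backoff with randomized jitter. A self-contained sketch of that retry discipline follows; the constants are illustrative, not minikube's actual tuning:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with exponentially growing, jittered delays,
// the same shape as the retry.go waits in this log.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Up to 50% random jitter keeps concurrent retriers from stampeding.
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(6, 300*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("apply failed (attempt %d)", calls)
		}
		return nil
	})
	fmt.Println("succeeded after", calls, "attempts")
}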
	I1002 20:29:51.603144  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:51.614399  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:51.750042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:51.817211  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.104464  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.115011  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.250151  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.316858  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:52.603659  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:52.614216  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:29:52.751315  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:52.817053  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.104565  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.113559  704660 kapi.go:107] duration metric: took 32.002874096s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 20:29:53.250114  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.317821  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:53.603164  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:53.750146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:53.820167  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.106776  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.250822  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.316832  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:54.603001  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:54.750421  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:54.817545  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.103737  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.250894  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.316949  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:55.603085  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:55.750103  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:55.816937  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.103610  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.250374  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.351350  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:56.603669  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:56.750222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:56.816995  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.103711  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.250016  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.317173  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:57.603412  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:57.749585  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:57.817087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.106858  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.250249  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.317416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:58.602677  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:58.751843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:58.816975  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.104520  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.250328  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.316837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:29:59.603027  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:29:59.750542  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:29:59.817568  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.118971  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.260853  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.324376  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:00.603347  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:00.751070  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:00.817027  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.116318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.249998  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.318228  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:01.604526  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:01.750944  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:01.818452  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.104307  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.254223  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.318397  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:02.604952  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:02.750890  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:02.817295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.106126  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.254295  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.317579  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:03.603623  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:03.755126  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:03.818458  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.070964  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:04.103003  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.251061  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.317116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:04.604016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:04.750159  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:04.819498  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.103756  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.249080  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.316620  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.603780  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:05.751506  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:05.820087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:05.861050  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.790044781s)
	W1002 20:30:05.861139  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:05.861176  704660 retry.go:31] will retry after 17.393091817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:06.103387  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.250507  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.317837  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:06.603460  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:06.750558  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:06.817614  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.103902  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.250598  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.316702  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:07.602834  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:07.754146  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:07.822685  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.103768  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.251042  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.316848  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:08.603426  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:08.750576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:08.841843  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.103764  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.250354  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.331806  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:09.605318  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:09.750657  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:09.817095  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.103398  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.255408  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.318022  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:10.603132  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:10.750403  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:10.818293  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.104225  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.250993  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.317127  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:11.603016  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:11.749773  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:11.817866  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.103202  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.255976  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.317255  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:12.604954  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:12.750466  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:12.817799  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.121875  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.251358  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.317771  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:13.603035  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:13.749741  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:13.816693  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.103790  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.250141  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.317253  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:14.603881  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:14.751654  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:14.834207  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.104408  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.249815  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.316650  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:15.602801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:15.750009  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:15.817116  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.120769  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.251147  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.352347  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:16.603722  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:16.749988  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:16.817248  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.104049  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.250170  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.317087  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:17.603966  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:17.751038  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:17.817272  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.104249  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.254111  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.354335  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:18.603774  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:18.750446  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:18.820222  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.104228  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.250204  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.317641  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:19.603235  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:19.750469  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:19.817720  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.103219  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.249901  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.354982  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:20.603352  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:20.750342  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:20.816943  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.104120  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.250875  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.316432  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:21.604183  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:21.751198  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:21.851690  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.103478  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.249326  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:30:22.318236  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:22.605156  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:22.750311  704660 kapi.go:107] duration metric: took 1m1.004091859s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:30:22.818417  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.103467  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.254761  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:30:23.317834  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:23.603470  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:23.816589  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.105925  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.317505  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:24.604867  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:24.802347  704660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.547475184s)
	W1002 20:30:24.802389  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:30:24.802426  704660 retry.go:31] will retry after 27.998098838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
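
The retry.go delays above (12.7s, then 17.4s, then 28.0s) show minikube re-running the apply with growing, jittered waits. A rough shell sketch of the same retry-with-backoff pattern (the doubling schedule here is illustrative, not minikube's exact jitter):

    # Retry the failing apply with increasing delays between attempts.
    delay=10
    for attempt in 1 2 3 4; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      echo "attempt ${attempt} failed; retrying in ${delay}s" >&2
      sleep "${delay}"
      delay=$((delay * 2))
    done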
	I1002 20:30:24.817602  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.106548  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.317082  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:25.603074  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:25.817303  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.103771  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.316828  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:26.603416  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:26.816576  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.102651  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.316355  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:27.603434  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:27.816609  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.103586  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.318112  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:28.604364  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:28.816965  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.103801  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.317624  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:29.603114  704660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:30:29.817415  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.103838  704660 kapi.go:107] duration metric: took 1m11.004121778s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:30:30.316991  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:30.817460  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.316734  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:31.817416  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.321137  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:32.818165  704660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:30:33.318614  704660 kapi.go:107] duration metric: took 1m9.005007455s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:30:33.319986  704660 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-991638 cluster.
	I1002 20:30:33.321179  704660 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:30:33.322167  704660 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
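
The gcp-auth hints above can be exercised directly: opting a pod out of credential mounting means giving it the `gcp-auth-skip-secret` label. A minimal sketch (pod name and image are placeholders; the "true" value follows minikube's documented usage):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                  # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"      # opts this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9  # placeholder image
    EOF

For pods created before the addon finished, the third hint applies: recreate them, or rerun `minikube addons enable gcp-auth --refresh`.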
	I1002 20:30:52.801095  704660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:30:53.728667  704660 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:30:53.728763  704660 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:30:53.731775  704660 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, registry-creds, volcano, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1002 20:30:53.733577  704660 addons.go:514] duration metric: took 1m44.75893549s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns registry-creds volcano storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
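
Since only 'inspektor-gadget' failed its enable callbacks while the fifteen addons above came up cleanly, a plausible manual follow-up (standard minikube CLI, same profile as in the log) is to retry just that addon:

    # Retry only the failed addon against the addons-991638 profile.
    minikube addons disable inspektor-gadget -p addons-991638
    minikube addons enable inspektor-gadget -p addons-991638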
	I1002 20:30:53.733631  704660 start.go:246] waiting for cluster config update ...
	I1002 20:30:53.733654  704660 start.go:255] writing updated cluster config ...
	I1002 20:30:53.733956  704660 ssh_runner.go:195] Run: rm -f paused
	I1002 20:30:53.738361  704660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:30:53.742889  704660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.750373  704660 pod_ready.go:94] pod "coredns-66bc5c9577-wkwnx" is "Ready"
	I1002 20:30:53.750443  704660 pod_ready.go:86] duration metric: took 7.51962ms for pod "coredns-66bc5c9577-wkwnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.752616  704660 pod_ready.go:83] waiting for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.757985  704660 pod_ready.go:94] pod "etcd-addons-991638" is "Ready"
	I1002 20:30:53.758011  704660 pod_ready.go:86] duration metric: took 5.320347ms for pod "etcd-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.760125  704660 pod_ready.go:83] waiting for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.764465  704660 pod_ready.go:94] pod "kube-apiserver-addons-991638" is "Ready"
	I1002 20:30:53.764491  704660 pod_ready.go:86] duration metric: took 4.30499ms for pod "kube-apiserver-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:53.766969  704660 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.142419  704660 pod_ready.go:94] pod "kube-controller-manager-addons-991638" is "Ready"
	I1002 20:30:54.142449  704660 pod_ready.go:86] duration metric: took 375.451024ms for pod "kube-controller-manager-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.342704  704660 pod_ready.go:83] waiting for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.742276  704660 pod_ready.go:94] pod "kube-proxy-xfnp6" is "Ready"
	I1002 20:30:54.742307  704660 pod_ready.go:86] duration metric: took 399.528424ms for pod "kube-proxy-xfnp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:54.943143  704660 pod_ready.go:83] waiting for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344485  704660 pod_ready.go:94] pod "kube-scheduler-addons-991638" is "Ready"
	I1002 20:30:55.344522  704660 pod_ready.go:86] duration metric: took 401.35166ms for pod "kube-scheduler-addons-991638" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:30:55.344539  704660 pod_ready.go:40] duration metric: took 1.606141213s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
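
The pod_ready.go polling above has a close kubectl equivalent; a sketch using the same context and label selectors from the log (the 4m timeout mirrors the "extra waiting up to 4m0s" line):

    # Wait for the same kube-system control-plane pods to report Ready.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
      kubectl --context addons-991638 -n kube-system wait pod -l "$sel" \
        --for=condition=Ready --timeout=4m
    done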
	I1002 20:30:55.401584  704660 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:30:55.403167  704660 out.go:179] * Done! kubectl is now configured to use "addons-991638" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 02 20:35:29 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:35:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/906bedcfeb0c432e06ce67cfa38d190685a2fa1ec7fdebe81ceb4c0b78e846fd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:35:30 addons-991638 dockerd[1126]: time="2025-10-02T20:35:30.304822986Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:35:45 addons-991638 dockerd[1126]: time="2025-10-02T20:35:45.788335537Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:35:45 addons-991638 dockerd[1126]: time="2025-10-02T20:35:45.837853566Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:35:45 addons-991638 dockerd[1126]: time="2025-10-02T20:35:45.941482300Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:36:11 addons-991638 dockerd[1126]: time="2025-10-02T20:36:11.765720917Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:36:30 addons-991638 dockerd[1126]: time="2025-10-02T20:36:30.593223324Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:36:30 addons-991638 dockerd[1126]: time="2025-10-02T20:36:30.687476422Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:37:03 addons-991638 dockerd[1126]: time="2025-10-02T20:37:03.591579576Z" level=info msg="ignoring event" container=5d66b2afab44ea2e4bb189a548d9c224559701f95943fe7876ed111699112818 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:37:04 addons-991638 dockerd[1126]: time="2025-10-02T20:37:04.759386648Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:37:18 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:37:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ebfdaff2b198a80d680ea12030e69d14c1b5ef229534a3e53bbf351bc1f4ea72/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:37:19 addons-991638 dockerd[1126]: time="2025-10-02T20:37:19.049516383Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:37:19 addons-991638 dockerd[1126]: time="2025-10-02T20:37:19.144237181Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:37:34 addons-991638 dockerd[1126]: time="2025-10-02T20:37:34.590316757Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:37:34 addons-991638 dockerd[1126]: time="2025-10-02T20:37:34.698948485Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:37:59 addons-991638 dockerd[1126]: time="2025-10-02T20:37:59.594170320Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:37:59 addons-991638 dockerd[1126]: time="2025-10-02T20:37:59.693224991Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:38:28 addons-991638 dockerd[1126]: time="2025-10-02T20:38:28.880773467Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:38:28 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:38:28Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 20:38:52 addons-991638 dockerd[1126]: time="2025-10-02T20:38:52.598298706Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:38:52 addons-991638 dockerd[1126]: time="2025-10-02T20:38:52.695241277Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:39:19 addons-991638 dockerd[1126]: time="2025-10-02T20:39:19.064695814Z" level=info msg="ignoring event" container=ebfdaff2b198a80d680ea12030e69d14c1b5ef229534a3e53bbf351bc1f4ea72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:39:49 addons-991638 cri-dockerd[1428]: time="2025-10-02T20:39:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1835b72aeeced3971cf822ef77d27ad8cd784390aa0a66af560f9268e113c031/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:39:49 addons-991638 dockerd[1126]: time="2025-10-02T20:39:49.555100510Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:39:49 addons-991638 dockerd[1126]: time="2025-10-02T20:39:49.653392743Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
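
The recurring toomanyrequests lines are Docker Hub's unauthenticated pull rate limit, and they are what keeps the nginx and busybox images (and so the test pods) from starting. Two hedged remedies, assuming registry credentials are available to the runner ($DOCKERHUB_USER is a placeholder):

    # Authenticated pulls get a much higher Docker Hub quota.
    docker login -u "$DOCKERHUB_USER"    # prompts for a password or access token

    # Or side-load the image into the node so the pod needs no pull at all
    # (tag is illustrative; the log references busybox by digest).
    minikube -p addons-991638 image load docker.io/library/busybox:stable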
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	47dac9cf297c2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   bbce1f80c46b4       busybox                                    default
	810d41d3d1f91       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             9 minutes ago       Running             controller                               0                   38baae6c52ebc       ingress-nginx-controller-9cc49f96f-g6rz7   ingress-nginx
	7fe1ae5b58acc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	087c9272590bb       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          9 minutes ago       Running             csi-provisioner                          0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	f673a92f38d37       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            9 minutes ago       Running             liveness-probe                           0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	26e913322af4f       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           9 minutes ago       Running             hostpath                                 0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	f33b41dff54c1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                9 minutes ago       Running             node-driver-registrar                    0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	8c93b919c5b4b       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   a9a8d56da7da5       csi-hostpath-resizer-0                     kube-system
	714339ab4a604       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   c80a56727d57a       csi-hostpathplugin-22xqp                   kube-system
	3afb513dbbbaa       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             9 minutes ago       Running             csi-attacher                             0                   5c0161b7af378       csi-hostpath-attacher-0                    kube-system
	3ef8d0f1a48cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   9 minutes ago       Exited              patch                                    0                   bf2651aa1dde2       ingress-nginx-admission-patch-z8w27        ingress-nginx
	0612a088672a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   9 minutes ago       Exited              create                                   0                   3e77d9aaaed22       ingress-nginx-admission-create-h2p7z       ingress-nginx
	edb7914b91d73       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   063272a1fd848       snapshot-controller-7d9fbc56b8-n92kj       kube-system
	df4c807a71bc6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            10 minutes ago      Running             gadget                                   0                   2dffa89109ee8       gadget-gq5qh                               gadget
	eebe9684b11cf       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   30e397fdcba62       snapshot-controller-7d9fbc56b8-htvkn       kube-system
	7f30da4857b71       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   dbb862b79dae1       local-path-provisioner-648f6765c9-v6wrv    local-path-storage
	dc6958ff54fd4       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         10 minutes ago      Running             minikube-ingress-dns                     0                   c8ba98b08e917       kube-ingress-dns-minikube                  kube-system
	2380c15f69fdf       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        10 minutes ago      Running             metrics-server                           0                   5e5723de853e6       metrics-server-85b7d694d7-4vr85            kube-system
	7b7e993c0e79f       ba04bb24b9575                                                                                                                                10 minutes ago      Running             storage-provisioner                      0                   48962134af601       storage-provisioner                        kube-system
	6691f55a72958       138784d87c9c5                                                                                                                                10 minutes ago      Running             coredns                                  0                   8d8b118e8d1e4       coredns-66bc5c9577-wkwnx                   kube-system
	484f1ee7ca6c4       05baa95f5142d                                                                                                                                10 minutes ago      Running             kube-proxy                               0                   9057048c41ea1       kube-proxy-xfnp6                           kube-system
	5dc910c8154e4       a1894772a478e                                                                                                                                11 minutes ago      Running             etcd                                     0                   c6f607736ce1a       etcd-addons-991638                         kube-system
	14517010441e5       b5f57ec6b9867                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   45e90d4f82e13       kube-scheduler-addons-991638               kube-system
	aac6857cf97a0       7eb2c6ff0c5a7                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   b61da85a9eb0e       kube-controller-manager-addons-991638      kube-system
	a59993882d357       43911e833d64d                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   36c3274520a66       kube-apiserver-addons-991638               kube-system
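
This table is a snapshot of the node's container runtime state; it can be regenerated on demand with crictl inside the minikube node (a sketch, assuming the docker/cri-dockerd runtime shown in the logs):

    # List all containers (running and exited) on the addons-991638 node.
    minikube -p addons-991638 ssh -- sudo crictl ps -a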
	
	
	==> controller_ingress [810d41d3d1f9] <==
	W1002 20:30:28.973084       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1002 20:30:28.973239       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I1002 20:30:28.984625       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.1" state="clean" commit="93248f9ae092f571eb870b7664c534bfc7d00f03" platform="linux/arm64"
	I1002 20:30:29.683889       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1002 20:30:29.697215       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1002 20:30:29.706323       7 nginx.go:273] "Starting NGINX Ingress controller"
	I1002 20:30:29.718196       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b8d60449-ae96-4c13-92a1-c389e5fce3f6", APIVersion:"v1", ResourceVersion:"754", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1002 20:30:29.719963       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"5926004e-4933-4581-9e6d-0da6edb9d128", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1002 20:30:29.720147       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"76b21a16-cdca-482a-bd26-5e6fea1a4b71", APIVersion:"v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1002 20:30:30.909112       7 nginx.go:319] "Starting NGINX process"
	I1002 20:30:30.909390       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1002 20:30:30.910173       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1002 20:30:30.910704       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1002 20:30:30.918480       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1002 20:30:30.918697       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-g6rz7"
	I1002 20:30:30.924600       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.934073       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-g6rz7" node="addons-991638"
	I1002 20:30:30.957588       7 controller.go:228] "Backend successfully reloaded"
	I1002 20:30:30.957659       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1002 20:30:30.957685       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-g6rz7", UID:"28bd2348-f54e-4228-ba87-582f2b81f73f", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [6691f55a7295] <==
	[INFO] 10.244.0.7:47201 - 40794 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002771285s
	[INFO] 10.244.0.7:47201 - 57423 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000191904s
	[INFO] 10.244.0.7:47201 - 29961 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000108481s
	[INFO] 10.244.0.7:35713 - 8952 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191206s
	[INFO] 10.244.0.7:35713 - 8475 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100112s
	[INFO] 10.244.0.7:33033 - 27442 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128445s
	[INFO] 10.244.0.7:33033 - 27253 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087024s
	[INFO] 10.244.0.7:45040 - 19609 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108638s
	[INFO] 10.244.0.7:45040 - 19412 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134558s
	[INFO] 10.244.0.7:37712 - 40936 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001243118s
	[INFO] 10.244.0.7:37712 - 41124 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001461721s
	[INFO] 10.244.0.7:56368 - 25712 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121651s
	[INFO] 10.244.0.7:56368 - 25933 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087615s
	[INFO] 10.244.0.26:33665 - 7524 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225356s
	[INFO] 10.244.0.26:36616 - 9923 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170948s
	[INFO] 10.244.0.26:57364 - 60911 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153093s
	[INFO] 10.244.0.26:49778 - 1221 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113478s
	[INFO] 10.244.0.26:50758 - 6790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157762s
	[INFO] 10.244.0.26:47970 - 38720 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085318s
	[INFO] 10.244.0.26:47839 - 36929 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002380387s
	[INFO] 10.244.0.26:52240 - 40464 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002084794s
	[INFO] 10.244.0.26:58902 - 63295 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001598231s
	[INFO] 10.244.0.26:38424 - 57615 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001549484s
	[INFO] 10.244.0.29:36958 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254756s
	[INFO] 10.244.0.29:59866 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178841s
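
The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion rather than failures: with the default ndots:5, a name like registry.kube-system.svc.cluster.local is tried against each suffix in the pod's search list (its own namespace, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal) before the absolute lookup succeeds. The search list driving this can be read out of any running pod's resolv.conf; the pod name below is only a placeholder:

  kubectl --context addons-991638 exec <running-pod> -- cat /etc/resolv.conf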
	
	
	==> describe nodes <==
	Name:               addons-991638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-991638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-991638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-991638
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-991638"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-991638
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:39:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:35:41 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:35:41 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:35:41 +0000   Thu, 02 Oct 2025 20:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:35:41 +0000   Thu, 02 Oct 2025 20:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-991638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 72f32394f70644d59920eb3322dfa720
	  System UUID:                86ebb095-120f-4f4a-aceb-13d70f79315b
	  Boot ID:                    da6cbe7f-2b2e-4cba-8b8d-394577434cdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  gadget                      gadget-gq5qh                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-g6rz7                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-wkwnx                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-22xqp                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-991638                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-991638                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-991638                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-xfnp6                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-991638                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-85b7d694d7-4vr85                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 registry-creds-764b6fb674-nsjx4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-htvkn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-n92kj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  local-path-storage          local-path-provisioner-648f6765c9-v6wrv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-991638 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-991638 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-991638 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                node-controller  Node addons-991638 event: Registered Node addons-991638 in Controller
	  Normal   NodeReady                10m                kubelet          Node addons-991638 status is now: NodeReady
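
The percentages in the Allocated resources table above are taken against the node's allocatable figures: 950m of CPU requests on a 2-core node is 950m / 2000m = 47.5%, floored to 47%; the 460Mi memory request against 8022300Ki (about 7834Mi) works out to roughly 5.9%, shown as 5%; and the single 170Mi memory limit (coredns) is about 2.2%, shown as 2%.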
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:27] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5dc910c8154e] <==
	{"level":"warn","ts":"2025-10-02T20:28:59.796857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.825855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.835763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.861875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.881048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.889633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:28:59.959804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.946219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:22.972286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.836192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.866041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.877941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.897162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.933812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:37.977588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.014404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.063387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.106303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.178294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.193258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.208837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:29:38.237195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:38:58.669143Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1750}
	{"level":"info","ts":"2025-10-02T20:38:58.735928Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1750,"took":"66.108561ms","hash":2247637866,"current-db-size-bytes":10399744,"current-db-size":"10 MB","current-db-size-in-use-bytes":6627328,"current-db-size-in-use":"6.6 MB"}
	{"level":"info","ts":"2025-10-02T20:38:58.735983Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2247637866,"revision":1750,"compact-revision":-1}
	
	
	==> kernel <==
	 20:40:04 up  3:22,  0 user,  load average: 1.84, 1.50, 2.25
	Linux addons-991638 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [a59993882d35] <==
	I1002 20:34:16.031557       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:16.275372       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:16.450854       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:17.114908       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 20:34:17.153003       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 20:34:17.178981       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 20:34:17.225384       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:17.248672       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1002 20:34:17.538179       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 20:34:17.852979       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1002 20:34:17.905403       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 20:34:18.021722       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1002 20:34:18.022097       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1002 20:34:18.153528       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	I1002 20:34:18.220255       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 20:34:18.323204       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1002 20:34:18.375763       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1002 20:34:18.407803       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1002 20:34:19.216280       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1002 20:34:19.501373       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1002 20:34:36.832244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45764: use of closed network connection
	E1002 20:34:37.126713       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45804: use of closed network connection
	E1002 20:34:37.290602       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45832: use of closed network connection
	I1002 20:35:11.208106       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.127.144"}
	I1002 20:39:00.779812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
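
The "Adding GroupVersion" and "Terminating all watchers" pairs above trace the Volcano CRDs being registered and then deleted as the addon was torn down after the failed TestAddons/serial/Volcano; terminating watchers on CRD deletion is expected behavior. That the teardown completed can be checked afterwards (this should print nothing once the CRDs are gone):

  kubectl --context addons-991638 get crd -o name | grep volcano.sh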
	
	
	==> kube-controller-manager [aac6857cf97a] <==
	E1002 20:39:00.463556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:09.332993       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:09.334113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:17.126977       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:17.128146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:17.882259       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:17.883381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:28.579029       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:28.580071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:28.666638       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:28.667854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:28.814545       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:28.816212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:41.026615       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:41.027755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:49.839461       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:49.840851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:50.389560       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:50.390681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:52.872368       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:52.873578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:55.344402       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:55.345855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:39:55.952474       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:39:55.953640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
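
This steady stream of PartialObjectMetadata watch errors is the controller-manager's metadata informers (the garbage collector among them) still retrying list/watch against the Volcano types whose CRDs were just deleted; the messages are noisy but benign and taper off once the stale informers are dropped. A quick way to watch them subside:

  kubectl --context addons-991638 -n kube-system logs kube-controller-manager-addons-991638 --since=10m | grep -c PartialObjectMetadata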
	
	
	==> kube-proxy [484f1ee7ca6c] <==
	I1002 20:29:10.144358       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:29:10.287533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:29:10.388187       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:29:10.388220       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:29:10.388302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:29:10.427067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:29:10.427117       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:29:10.431953       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:29:10.432214       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:29:10.432229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:29:10.433939       1 config.go:200] "Starting service config controller"
	I1002 20:29:10.433950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:29:10.433980       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:29:10.433985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:29:10.433996       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:29:10.434000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:29:10.435854       1 config.go:309] "Starting node config controller"
	I1002 20:29:10.435864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:29:10.435871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:29:10.535044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:29:10.535084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:29:10.535128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [14517010441e] <==
	E1002 20:29:00.811484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:00.815087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:00.815264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:00.815378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:29:00.815413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:00.815443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:00.815517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:00.815547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:00.815654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:29:00.815692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:29:00.815742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:29:01.619085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:29:01.626118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:29:01.726859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:29:01.845808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:29:01.894559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:29:01.899233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:29:01.914113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:29:01.933506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:29:01.941316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:29:02.102088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:29:02.108982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:29:02.129471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:29:02.240337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 20:29:04.797841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:39:18 addons-991638 kubelet[2264]: E1002 20:39:18.545554    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.183468    2264 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5vdm\" (UniqueName: \"kubernetes.io/projected/4ba6eb10-3992-42b9-bea5-6fcfff71feee-kube-api-access-b5vdm\") pod \"4ba6eb10-3992-42b9-bea5-6fcfff71feee\" (UID: \"4ba6eb10-3992-42b9-bea5-6fcfff71feee\") "
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.183534    2264 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4ba6eb10-3992-42b9-bea5-6fcfff71feee-script\") pod \"4ba6eb10-3992-42b9-bea5-6fcfff71feee\" (UID: \"4ba6eb10-3992-42b9-bea5-6fcfff71feee\") "
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.183555    2264 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4ba6eb10-3992-42b9-bea5-6fcfff71feee-data\") pod \"4ba6eb10-3992-42b9-bea5-6fcfff71feee\" (UID: \"4ba6eb10-3992-42b9-bea5-6fcfff71feee\") "
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.183697    2264 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba6eb10-3992-42b9-bea5-6fcfff71feee-data" (OuterVolumeSpecName: "data") pod "4ba6eb10-3992-42b9-bea5-6fcfff71feee" (UID: "4ba6eb10-3992-42b9-bea5-6fcfff71feee"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.184425    2264 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba6eb10-3992-42b9-bea5-6fcfff71feee-script" (OuterVolumeSpecName: "script") pod "4ba6eb10-3992-42b9-bea5-6fcfff71feee" (UID: "4ba6eb10-3992-42b9-bea5-6fcfff71feee"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.185756    2264 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba6eb10-3992-42b9-bea5-6fcfff71feee-kube-api-access-b5vdm" (OuterVolumeSpecName: "kube-api-access-b5vdm") pod "4ba6eb10-3992-42b9-bea5-6fcfff71feee" (UID: "4ba6eb10-3992-42b9-bea5-6fcfff71feee"). InnerVolumeSpecName "kube-api-access-b5vdm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.284440    2264 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5vdm\" (UniqueName: \"kubernetes.io/projected/4ba6eb10-3992-42b9-bea5-6fcfff71feee-kube-api-access-b5vdm\") on node \"addons-991638\" DevicePath \"\""
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.284480    2264 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4ba6eb10-3992-42b9-bea5-6fcfff71feee-script\") on node \"addons-991638\" DevicePath \"\""
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.284492    2264 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4ba6eb10-3992-42b9-bea5-6fcfff71feee-data\") on node \"addons-991638\" DevicePath \"\""
	Oct 02 20:39:19 addons-991638 kubelet[2264]: I1002 20:39:19.558211    2264 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba6eb10-3992-42b9-bea5-6fcfff71feee" path="/var/lib/kubelet/pods/4ba6eb10-3992-42b9-bea5-6fcfff71feee/volumes"
	Oct 02 20:39:29 addons-991638 kubelet[2264]: E1002 20:39:29.260201    2264 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 20:39:29 addons-991638 kubelet[2264]: E1002 20:39:29.260290    2264 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/915a1770-063b-4100-8bfa-c7e4d2680639-gcr-creds podName:915a1770-063b-4100-8bfa-c7e4d2680639 nodeName:}" failed. No retries permitted until 2025-10-02 20:41:31.260272921 +0000 UTC m=+747.818494387 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/915a1770-063b-4100-8bfa-c7e4d2680639-gcr-creds") pod "registry-creds-764b6fb674-nsjx4" (UID: "915a1770-063b-4100-8bfa-c7e4d2680639") : secret "registry-creds-gcr" not found
	Oct 02 20:39:29 addons-991638 kubelet[2264]: E1002 20:39:29.545701    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:39:43 addons-991638 kubelet[2264]: E1002 20:39:43.545950    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: I1002 20:39:49.013385    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/10807586-d70e-4c45-a37d-cbdaee2756d4-script\") pod \"helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b\" (UID: \"10807586-d70e-4c45-a37d-cbdaee2756d4\") " pod="local-path-storage/helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: I1002 20:39:49.013498    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cp25\" (UniqueName: \"kubernetes.io/projected/10807586-d70e-4c45-a37d-cbdaee2756d4-kube-api-access-6cp25\") pod \"helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b\" (UID: \"10807586-d70e-4c45-a37d-cbdaee2756d4\") " pod="local-path-storage/helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: I1002 20:39:49.013525    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/10807586-d70e-4c45-a37d-cbdaee2756d4-data\") pod \"helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b\" (UID: \"10807586-d70e-4c45-a37d-cbdaee2756d4\") " pod="local-path-storage/helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: I1002 20:39:49.544842    2264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: E1002 20:39:49.657820    2264 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: E1002 20:39:49.657877    2264 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: E1002 20:39:49.657968    2264 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b_local-path-storage(10807586-d70e-4c45-a37d-cbdaee2756d4): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: E1002 20:39:49.658005    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b" podUID="10807586-d70e-4c45-a37d-cbdaee2756d4"
	Oct 02 20:39:49 addons-991638 kubelet[2264]: E1002 20:39:49.749379    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b" podUID="10807586-d70e-4c45-a37d-cbdaee2756d4"
	Oct 02 20:39:56 addons-991638 kubelet[2264]: E1002 20:39:56.545057    2264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2a4d6f31-4ea6-4dc4-9db8-8e941cc56f28"
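
Every failing workload in this run (task-pv-pod, busybox, the local-path helper pod) is blocked on the single root cause recorded here: unauthenticated Docker Hub pulls from the CI host hitting the toomanyrequests rate limit; the registry-creds pod is separately stuck because its registry-creds-gcr secret was never populated. One mitigation for the rate limit is authenticated pulls via an image pull secret attached to the namespace's default service account; a minimal sketch with placeholder credentials:

  kubectl --context addons-991638 create secret docker-registry dockerhub-creds \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<access-token>
  kubectl --context addons-991638 patch serviceaccount default \
    -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'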
	
	
	==> storage-provisioner [7b7e993c0e79] <==
	W1002 20:39:39.235204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:41.238375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:41.243052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:43.246314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:43.256770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:45.262367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:45.275070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:47.278844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:47.283653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:49.287883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:49.296667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:51.299908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:51.304231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:53.312117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:53.317709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:55.320301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:55.324802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:57.327891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:57.334800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:59.337531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:39:59.342110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:40:01.346171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:40:01.355174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:40:03.360084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:40:03.369615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
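
These warnings repeat roughly every two seconds because the storage-provisioner still takes its leader-election lock through the v1 Endpoints API, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice (and coordination.k8s.io/v1 Leases for locking); they are cosmetic here. The replacement objects can be listed with:

  kubectl --context addons-991638 get endpointslices -A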
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-991638 -n addons-991638
helpers_test.go:269: (dbg) Run:  kubectl --context addons-991638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-991638 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-991638 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b: exit status 1 (108.973896ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-991638/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:35:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxbjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-sxbjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m36s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-991638
	  Warning  Failed     3m1s (x4 over 4m35s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    97s (x5 over 4m35s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     97s (x5 over 4m35s)   kubelet            Error: ErrImagePull
	  Warning  Failed     97s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     36s (x15 over 4m34s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    9s (x17 over 4m34s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6vpp (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-p6vpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h2p7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z8w27" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nsjx4" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-991638 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-h2p7z ingress-nginx-admission-patch-z8w27 registry-creds-764b6fb674-nsjx4 helper-pod-create-pvc-9d8f803c-6b05-45ed-8140-4a5b9d3fcd6b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.840453503s)
--- FAIL: TestAddons/parallel/LocalPath (345.88s)
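
The Failed/BackOff event chain above traces this failure to Docker Hub's unauthenticated pull rate limit on docker.io/nginx rather than to the local-path provisioner itself. One possible mitigation, sketched below under the assumption that the out/minikube-linux-arm64 binary and the addons-991638 profile from this run are available, is to side-load the images before the test so the kubelet never pulls from docker.io. preloadImages is a hypothetical helper, not part of the harness.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// preloadImages side-loads each image into the cluster's container
	// runtime via `minikube image load` (the modern replacement for
	// `minikube cache add`), so the kubelet never pulls from docker.io.
	func preloadImages(profile string, images []string) error {
		for _, img := range images {
			cmd := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "load", img)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("loading %s: %v\n%s", img, err, out)
			}
		}
		return nil
	}

	func main() {
		// Profile and image names taken from this run; the helper itself
		// is hypothetical.
		err := preloadImages("addons-991638", []string{"docker.io/nginx", "busybox:stable"})
		if err != nil {
			fmt.Println(err)
		}
	}
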

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-535239 --alsologtostderr -v=1]
I1002 20:58:32.461562  703895 retry.go:31] will retry after 25.503805691s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:58:57.966588  703895 retry.go:31] will retry after 22.414468907s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-535239 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-535239 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-535239 --alsologtostderr -v=1] stderr:
I1002 20:58:29.795196  761973 out.go:360] Setting OutFile to fd 1 ...
I1002 20:58:29.796977  761973 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:58:29.797001  761973 out.go:374] Setting ErrFile to fd 2...
I1002 20:58:29.797007  761973 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:58:29.797378  761973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
I1002 20:58:29.797963  761973 mustload.go:65] Loading cluster: functional-535239
I1002 20:58:29.798365  761973 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:58:29.798836  761973 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:58:29.815790  761973 host.go:66] Checking if "functional-535239" exists ...
I1002 20:58:29.816101  761973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:58:29.872811  761973 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:58:29.862786512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:58:29.872943  761973 api_server.go:166] Checking apiserver status ...
I1002 20:58:29.873009  761973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:58:29.873049  761973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:58:29.890454  761973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:58:29.991092  761973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9524/cgroup
I1002 20:58:29.999436  761973 api_server.go:182] apiserver freezer: "10:freezer:/docker/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/kubepods/burstable/pod40293e0d6c0c6d3a12153b6c6db75e58/e25c43ccdf4e3c750bcd1a72954059153815c9908eb5490ea3cdcd1bfd61133d"
I1002 20:58:29.999553  761973 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/kubepods/burstable/pod40293e0d6c0c6d3a12153b6c6db75e58/e25c43ccdf4e3c750bcd1a72954059153815c9908eb5490ea3cdcd1bfd61133d/freezer.state
I1002 20:58:30.017496  761973 api_server.go:204] freezer state: "THAWED"
I1002 20:58:30.017536  761973 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 20:58:30.038298  761973 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 20:58:30.038341  761973 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 20:58:30.038548  761973 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:58:30.038565  761973 addons.go:69] Setting dashboard=true in profile "functional-535239"
I1002 20:58:30.038605  761973 addons.go:238] Setting addon dashboard=true in "functional-535239"
I1002 20:58:30.038647  761973 host.go:66] Checking if "functional-535239" exists ...
I1002 20:58:30.039126  761973 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:58:30.073002  761973 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 20:58:30.075856  761973 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 20:58:30.078778  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 20:58:30.078810  761973 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 20:58:30.078887  761973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:58:30.097588  761973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:58:30.203665  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 20:58:30.203714  761973 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 20:58:30.218320  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 20:58:30.218344  761973 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 20:58:30.232152  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 20:58:30.232206  761973 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 20:58:30.245884  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 20:58:30.245928  761973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 20:58:30.259320  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 20:58:30.259344  761973 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 20:58:30.273069  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 20:58:30.273091  761973 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 20:58:30.288958  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 20:58:30.288983  761973 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 20:58:30.330056  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 20:58:30.330083  761973 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 20:58:30.343643  761973 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:58:30.343667  761973 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 20:58:30.356918  761973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:58:31.178924  761973 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-535239 addons enable metrics-server

                                                
                                                
I1002 20:58:31.181999  761973 addons.go:201] Writing out "functional-535239" config to set dashboard=true...
W1002 20:58:31.182298  761973 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 20:58:31.182948  761973 kapi.go:59] client config for functional-535239: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.key", CAFile:"/home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 20:58:31.183465  761973 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 20:58:31.183487  761973 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 20:58:31.183493  761973 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 20:58:31.183501  761973 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 20:58:31.183506  761973 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 20:58:31.198585  761973 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  140efc13-d219-42be-b244-7b92f16cac19 1210 0 2025-10-02 20:58:31 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 20:58:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.108.56.186,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.108.56.186],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 20:58:31.198748  761973 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 20:58:31.198830  761973 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-535239 proxy --port 36195]
I1002 20:58:31.199093  761973 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 20:58:31.252516  761973 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 20:58:31.252576  761973 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 20:58:31.271928  761973 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[44de56f8-b5d3-48be-8a09-bc4c3de5c120] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:58:31 GMT]] Body:0x4000109140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000369a40 TLS:<nil>}
I1002 20:58:31.272026  761973 retry.go:31] will retry after 97.82µs: Temporary Error: unexpected response code: 503
I1002 20:58:31.276287  761973 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a9dea20-60ad-4a50-8672-b64cbe0bec4f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:58:31 GMT]] Body:0x40001091c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000369b80 TLS:<nil>}
I1002 20:58:31.276348  761973 retry.go:31] will retry after 191.465µs: Temporary Error: unexpected response code: 503
[... the proxy returned 503 on every subsequent attempt; retries continued with exponentially growing backoff (microseconds, then seconds, then minutes) from 20:58:31 through 21:00:01 ...]
I1002 21:00:55.076978  761973 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fdfe6ae0-e893-4e4f-a7d7-33878aec2d2a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:00:55 GMT]] Body:0x40007ae200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40000ed2c0 TLS:<nil>}
I1002 21:00:55.077051  761973 retry.go:31] will retry after 1m6.897382714s: Temporary Error: unexpected response code: 503
I1002 21:02:01.977894  761973 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bc6ff39-d33d-40b3-b3c3-6c9fac67c127] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:02:01 GMT]] Body:0x40007ae340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40000ed400 TLS:<nil>}
I1002 21:02:01.977961  761973 retry.go:31] will retry after 55.171110294s: Temporary Error: unexpected response code: 503
I1002 21:02:57.152221  761973 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c6889d2-ce3d-46ef-a44f-90d12ec9378b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:02:57 GMT]] Body:0x40005aa100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40000ed540 TLS:<nil>}
I1002 21:02:57.152294  761973 retry.go:31] will retry after 1m7.543813759s: Temporary Error: unexpected response code: 503
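
The retry.go lines above show the shape of this failure: kubectl proxy itself is healthy, but the dashboard Service behind it answers 503 on every probe, so the poller backs off exponentially until the test's time budget is exhausted. Below is a minimal sketch of that poll-with-capped-backoff pattern; the URL, initial delay, and cap are illustrative, not minikube's exact values.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitFor200 polls url until it answers 200 OK or the deadline passes,
	// doubling the sleep between attempts up to a one-minute cap, which is
	// the same shape as the retry.go backoff in the log above.
	func waitFor200(url string, deadline time.Duration) error {
		backoff := 100 * time.Microsecond
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(backoff)
			if backoff *= 2; backoff > time.Minute {
				backoff = time.Minute
			}
		}
		return fmt.Errorf("%s did not return 200 within %s", url, deadline)
	}

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		fmt.Println(waitFor200(url, 5*time.Minute))
	}
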
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-535239
helpers_test.go:243: (dbg) docker inspect functional-535239:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d",
	        "Created": "2025-10-02T20:50:24.183134624Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:50:24.251631866Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/hostname",
	        "HostsPath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/hosts",
	        "LogPath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d-json.log",
	        "Name": "/functional-535239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-535239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-535239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d",
	                "LowerDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f-init/diff:/var/lib/docker/overlay2/3c380b0850506122817bc2917299dd60fe15a32ab35b7debe4519f1f9045f4d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-535239",
	                "Source": "/var/lib/docker/volumes/functional-535239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-535239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-535239",
	                "name.minikube.sigs.k8s.io": "functional-535239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf5af4a7b80105780b243ce571b6861491414d5193f40c32670e4eb96107518d",
	            "SandboxKey": "/var/run/docker/netns/cf5af4a7b801",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33543"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-535239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:36:c0:46:ec:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3c79f9b02da7aeeb1af740959dcd7331e5dce7dc26edba67716ac0f4e2e9f15",
	                    "EndpointID": "48e9853503043613aa687abb17f22ba1f9c570458cc93073723bebc89fdb40ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-535239",
	                        "46a1576bc405"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
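
The inspect dump above is the structure the docker driver reads back to find where each forwarded guest port landed on the host: every key under NetworkSettings.Ports (22/tcp for SSH, 2376/tcp for dockerd, 8441/tcp for the API server) maps to a 127.0.0.1 HostPort. A minimal Go sketch of that lookup, assuming only that docker is on PATH and using the profile name from the output above (the struct is a hypothetical reduction for illustration, not minikube's own type):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // inspect models only the slice of `docker inspect` JSON used here.
    type inspect struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        // `docker inspect` prints a JSON array, one element per container.
        out, err := exec.Command("docker", "inspect", "functional-535239").Output()
        if err != nil {
            log.Fatal(err)
        }
        var containers []inspect
        if err := json.Unmarshal(out, &containers); err != nil {
            log.Fatal(err)
        }
        if len(containers) == 0 {
            log.Fatal("no such container")
        }
        // 8441/tcp is the forwarded API server port in the config above.
        for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
            fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
        }
    }

Against the container above this would print "apiserver reachable at 127.0.0.1:33543".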
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-535239 -n functional-535239
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-535239 logs -n 25: (1.20892561s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-535239 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ ssh            │ functional-535239 ssh -- ls -la /mount-9p                                                                          │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ ssh            │ functional-535239 ssh sudo umount -f /mount-9p                                                                     │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ mount          │ -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount1 --alsologtostderr -v=1 │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ mount          │ -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount2 --alsologtostderr -v=1 │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ ssh            │ functional-535239 ssh findmnt -T /mount1                                                                           │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ mount          │ -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount3 --alsologtostderr -v=1 │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ ssh            │ functional-535239 ssh findmnt -T /mount1                                                                           │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ ssh            │ functional-535239 ssh findmnt -T /mount2                                                                           │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ ssh            │ functional-535239 ssh findmnt -T /mount3                                                                           │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ mount          │ -p functional-535239 --kill=true                                                                                   │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ start          │ -p functional-535239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker        │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ start          │ -p functional-535239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ start          │ -p functional-535239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker        │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-535239 --alsologtostderr -v=1                                                     │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ update-context │ functional-535239 update-context --alsologtostderr -v=2                                                            │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ update-context │ functional-535239 update-context --alsologtostderr -v=2                                                            │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ update-context │ functional-535239 update-context --alsologtostderr -v=2                                                            │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ image          │ functional-535239 image ls --format short --alsologtostderr                                                        │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ image          │ functional-535239 image ls --format yaml --alsologtostderr                                                         │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ ssh            │ functional-535239 ssh pgrep buildkitd                                                                              │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │                     │
	│ image          │ functional-535239 image build -t localhost/my-image:functional-535239 testdata/build --alsologtostderr             │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ image          │ functional-535239 image ls                                                                                         │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ image          │ functional-535239 image ls --format json --alsologtostderr                                                         │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ image          │ functional-535239 image ls --format table --alsologtostderr                                                        │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:58:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:58:29.610822  761926 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:58:29.611036  761926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:29.611064  761926 out.go:374] Setting ErrFile to fd 2...
	I1002 20:58:29.611085  761926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:29.612131  761926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:58:29.612564  761926 out.go:368] Setting JSON to false
	I1002 20:58:29.613660  761926 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13236,"bootTime":1759425473,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:58:29.613801  761926 start.go:140] virtualization:  
	I1002 20:58:29.616956  761926 out.go:179] * [functional-535239] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:58:29.620671  761926 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:58:29.620744  761926 notify.go:220] Checking for updates...
	I1002 20:58:29.626447  761926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:58:29.629379  761926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:58:29.632162  761926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:58:29.634899  761926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:58:29.637680  761926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:58:29.641045  761926 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:58:29.641798  761926 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:58:29.666995  761926 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:58:29.667120  761926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:58:29.726501  761926 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:58:29.717160061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:58:29.726614  761926 docker.go:318] overlay module found
	I1002 20:58:29.729870  761926 out.go:179] * Using the docker driver based on existing profile
	I1002 20:58:29.732816  761926 start.go:304] selected driver: docker
	I1002 20:58:29.732833  761926 start.go:924] validating driver "docker" against &{Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:58:29.732940  761926 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:58:29.736572  761926 out.go:203] 
	W1002 20:58:29.739401  761926 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 20:58:29.742197  761926 out.go:203] 
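
The aborted dry-run above comes down to unit arithmetic: --memory 250MB is reported back as 250MiB, i.e. 250 × 1024² bytes ≈ 262 MB, which is below the 1800 MB usable floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message, so start exits before driver validation goes any further. A sketch of that comparison (illustrative only, not minikube's actual validation code):

    package main

    import "fmt"

    func main() {
        const minUsableMB = 1800                              // floor quoted in the error above
        requestedMiB := 250                                   // --memory 250MB is reported back as 250MiB
        requestedMB := requestedMiB * 1024 * 1024 / 1_000_000 // ≈ 262 MB
        fmt.Println(requestedMB < minUsableMB)                // true → RSRC_INSUFFICIENT_REQ_MEMORY
    }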
	
	
	==> Docker <==
	Oct 02 20:58:31 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:58:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64170018daa0f6607a82ebb41dbd46f8cb8bc8938ad271b2a3ed07c4954d6c6d/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:58:31 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:58:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b719eb3d2e5b99f42cfbc55304096ba8a27e64553a4973a0f3dc443f5743086b/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:58:31 functional-535239 dockerd[7025]: time="2025-10-02T20:58:31.639386555Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:58:31 functional-535239 dockerd[7025]: time="2025-10-02T20:58:31.729681714Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:58:31 functional-535239 dockerd[7025]: time="2025-10-02T20:58:31.779701576Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:58:31 functional-535239 dockerd[7025]: time="2025-10-02T20:58:31.873813707Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:58:42 functional-535239 dockerd[7025]: time="2025-10-02T20:58:42.900303158Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:58:43 functional-535239 dockerd[7025]: time="2025-10-02T20:58:43.126224063Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:58:43 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:58:43Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Oct 02 20:58:45 functional-535239 dockerd[7025]: time="2025-10-02T20:58:45.891346933Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:58:45 functional-535239 dockerd[7025]: time="2025-10-02T20:58:45.990891015Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:59:09 functional-535239 dockerd[7025]: time="2025-10-02T20:59:09.898060886Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:59:09 functional-535239 dockerd[7025]: time="2025-10-02T20:59:09.996936474Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:59:10 functional-535239 dockerd[7025]: time="2025-10-02T20:59:10.055205814Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:59:10 functional-535239 dockerd[7025]: time="2025-10-02T20:59:10.146330361Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:59:39 functional-535239 dockerd[7025]: time="2025-10-02T20:59:39.073512050Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:59:40 functional-535239 dockerd[7025]: time="2025-10-02T20:59:40.066065092Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:59:51 functional-535239 dockerd[7025]: time="2025-10-02T20:59:51.888274220Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:59:51 functional-535239 dockerd[7025]: time="2025-10-02T20:59:51.987581276Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:59:52 functional-535239 dockerd[7025]: time="2025-10-02T20:59:52.043558169Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:59:52 functional-535239 dockerd[7025]: time="2025-10-02T20:59:52.136528149Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:01:23 functional-535239 dockerd[7025]: time="2025-10-02T21:01:23.886418598Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 21:01:23 functional-535239 dockerd[7025]: time="2025-10-02T21:01:23.978011158Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:01:24 functional-535239 dockerd[7025]: time="2025-10-02T21:01:24.897769121Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 21:01:24 functional-535239 dockerd[7025]: time="2025-10-02T21:01:24.998584043Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
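
Every dashboard and metrics-scraper pull above fails with toomanyrequests, meaning the runner has exhausted Docker Hub's anonymous pull quota. Docker documents a way to read the current quota without consuming a pull: request an anonymous token scoped to the ratelimitpreview/test repository, HEAD its manifest, and inspect the ratelimit-limit / ratelimit-remaining response headers. A Go sketch of that probe (endpoint names and headers assumed from Docker's documentation, not taken from this log):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // 1. Anonymous bearer token scoped to the rate-limit preview repo.
        resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            log.Fatal(err)
        }
        // 2. HEAD the manifest; the quota comes back in response headers
        //    and a HEAD request does not count against it.
        req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        res.Body.Close()
        fmt.Println("limit:    ", res.Header.Get("ratelimit-limit"))     // e.g. 100;w=21600
        fmt.Println("remaining:", res.Header.Get("ratelimit-remaining")) // 0 when pulls start failing
    }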
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	685771cac2812       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   8a5336ee26638       busybox-mount                               default
	bed13a4be8f43       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   d1e586bcce3e4       hello-node-connect-7d85dfc575-4lt6r         default
	6bcf0a5c127bd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           9 minutes ago       Running             echo-server               0                   a1943608a6004       hello-node-75c85bcc94-nzng8                 default
	88eb627827a5b       ba04bb24b9575                                                                                         10 minutes ago      Running             storage-provisioner       3                   f45c3a7246ce2       storage-provisioner                         kube-system
	18514364c3f5d       138784d87c9c5                                                                                         10 minutes ago      Running             coredns                   2                   e2ee7241fdb7b       coredns-66bc5c9577-flhsr                    kube-system
	e91d7ad646c67       05baa95f5142d                                                                                         10 minutes ago      Running             kube-proxy                3                   2413c02c12fbd       kube-proxy-bmrx5                            kube-system
	e25c43ccdf4e3       43911e833d64d                                                                                         10 minutes ago      Running             kube-apiserver            0                   cc8fbb1916d60       kube-apiserver-functional-535239            kube-system
	ad29d8f68f353       7eb2c6ff0c5a7                                                                                         10 minutes ago      Running             kube-controller-manager   2                   78b70206d47ab       kube-controller-manager-functional-535239   kube-system
	7cd4763f35104       a1894772a478e                                                                                         10 minutes ago      Running             etcd                      2                   2587df64c0cfc       etcd-functional-535239                      kube-system
	653466d13f9c5       b5f57ec6b9867                                                                                         10 minutes ago      Running             kube-scheduler            3                   ca63a4d8b6392       kube-scheduler-functional-535239            kube-system
	187fd4e1e6097       05baa95f5142d                                                                                         10 minutes ago      Exited              kube-proxy                2                   81302a06478d5       kube-proxy-bmrx5                            kube-system
	7a3c81552efc6       b5f57ec6b9867                                                                                         10 minutes ago      Exited              kube-scheduler            2                   b8a6333fb24e1       kube-scheduler-functional-535239            kube-system
	622ea6d704c96       ba04bb24b9575                                                                                         10 minutes ago      Exited              storage-provisioner       2                   88bb88ebdacf1       storage-provisioner                         kube-system
	798fdb462df5b       138784d87c9c5                                                                                         11 minutes ago      Exited              coredns                   1                   f7036054826c1       coredns-66bc5c9577-flhsr                    kube-system
	aa21be7130bc0       a1894772a478e                                                                                         11 minutes ago      Exited              etcd                      1                   2ef6ea3b15ba5       etcd-functional-535239                      kube-system
	bb5924326d81d       7eb2c6ff0c5a7                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   f4bfe603591ee       kube-controller-manager-functional-535239   kube-system
	
	
	==> coredns [18514364c3f5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54141 - 40596 "HINFO IN 1466766216576499033.4307511153640274082. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023174456s
	
	
	==> coredns [798fdb462df5] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58342 - 31472 "HINFO IN 1148319212355416395.5702211502949280260. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035875182s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-535239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-535239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=functional-535239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_50_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:50:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-535239
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:03:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:59:49 +0000   Thu, 02 Oct 2025 20:50:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:59:49 +0000   Thu, 02 Oct 2025 20:50:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:59:49 +0000   Thu, 02 Oct 2025 20:50:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:59:49 +0000   Thu, 02 Oct 2025 20:50:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-535239
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b06b61a93cf498296353568717f7c62
	  System UUID:                5a18d05d-5bd3-4799-867c-92b5e4e37cc0
	  Boot ID:                    da6cbe7f-2b2e-4cba-8b8d-394577434cdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-nzng8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  default                     hello-node-connect-7d85dfc575-4lt6r           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 coredns-66bc5c9577-flhsr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-535239                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-535239              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-535239     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bmrx5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-535239              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vbd2c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2vwc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-535239 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-535239 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-535239 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeReady                12m                kubelet          Node functional-535239 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-535239 event: Registered Node functional-535239 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node functional-535239 event: Registered Node functional-535239 in Controller
	  Warning  ContainerGCFailed        10m (x2 over 11m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-535239 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-535239 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-535239 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node functional-535239 event: Registered Node functional-535239 in Controller
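
For reference, the 37% CPU figure in the allocation summary above is just the request column summed against the node's 2 CPUs: 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 750m, and 750m / 2000m = 37.5%, which kubectl truncates to 37%.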
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:27] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [7cd4763f3510] <==
	{"level":"warn","ts":"2025-10-02T20:53:22.203236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.218149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.234663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.253547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.269269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.287015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.311985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.338929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.365508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.378031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.396375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.422820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.444837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.472807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.474427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.488843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.503131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.524124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.544446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.560279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.578324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.650474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52512","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:03:21.395309Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2025-10-02T21:03:21.421000Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1145,"took":"25.236234ms","hash":266896530,"current-db-size-bytes":3665920,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1875968,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-02T21:03:21.421053Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":266896530,"revision":1145,"compact-revision":-1}
	
	
	==> etcd [aa21be7130bc] <==
	{"level":"warn","ts":"2025-10-02T20:52:20.982625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.004884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.022861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.084677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.120556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.146218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.201672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53312","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:53:01.076004Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:53:01.076069Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-535239","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T20:53:01.076192Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:53:01.076250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:53:08.078726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:53:08.079024Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T20:53:08.082931Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:53:08.083038Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084432Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084550Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:53:08.084601Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084738Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084786Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:53:08.084825Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:53:08.087609Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T20:53:08.087868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:53:08.087907Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T20:53:08.087917Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-535239","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 21:03:31 up  3:45,  0 user,  load average: 0.28, 0.48, 1.07
	Linux functional-535239 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e25c43ccdf4e] <==
	I1002 20:53:23.438469       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 20:53:23.438672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 20:53:23.446352       1 aggregator.go:171] initial CRD sync complete...
	I1002 20:53:23.446382       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 20:53:23.446389       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 20:53:23.446395       1 cache.go:39] Caches are synced for autoregister controller
	I1002 20:53:23.446775       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 20:53:23.473386       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:53:23.913742       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:53:24.183139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:53:25.061865       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:53:25.105287       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:53:25.145660       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:53:25.156606       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:53:26.971366       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 20:53:27.022350       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:53:27.076329       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:53:38.942910       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.108.86"}
	I1002 20:53:49.222100       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.175.100"}
	I1002 20:53:52.116583       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.60.50"}
	I1002 20:58:07.601363       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.237.103"}
	I1002 20:58:30.795989       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 20:58:31.130751       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.56.186"}
	I1002 20:58:31.167410       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.5.85"}
	I1002 21:03:23.299951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ad29d8f68f35] <==
	I1002 20:53:26.698326       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:53:26.698374       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:53:26.698426       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:53:26.702779       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 20:53:26.708984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:53:26.709059       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:53:26.710213       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:53:26.710293       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:53:26.710342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:53:26.713102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:53:26.713131       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:53:26.713140       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:53:26.713209       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:53:26.713525       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:53:26.714383       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:53:26.714694       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:53:26.722354       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:53:26.724019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:53:26.744701       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1002 20:58:30.894710       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:58:30.914363       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:58:30.930237       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:58:30.939991       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:58:30.940376       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:58:30.951253       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
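	# The five "Unhandled Error" entries above coincide with the dashboard
	# namespace creation at 20:58:30: the ReplicaSets sync before their
	# ServiceAccount exists, and the controller retries until it appears.
	# A hedged check that the account did eventually materialize:
	kubectl --context functional-535239 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard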
	
	
	==> kube-controller-manager [bb5924326d81] <==
	I1002 20:52:25.552261       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:52:25.552410       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-535239"
	I1002 20:52:25.552506       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:52:25.554800       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 20:52:25.557550       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:52:25.564041       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:52:25.566890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:52:25.569567       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 20:52:25.573526       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 20:52:25.577887       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 20:52:25.581215       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:52:25.583602       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 20:52:25.583851       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 20:52:25.583978       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:52:25.584108       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:52:25.584251       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:52:25.584377       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:52:25.584380       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 20:52:25.584602       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:52:25.584939       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:52:25.585356       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:52:25.586423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:52:25.587653       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:52:25.592926       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:52:25.598403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [187fd4e1e609] <==
	
	
	==> kube-proxy [e91d7ad646c6] <==
	I1002 20:53:24.664468       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:53:24.763000       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:53:24.864178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:53:24.864261       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:53:24.864389       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:53:24.908668       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:53:24.908727       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:53:24.926515       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:53:24.927129       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:53:24.927145       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:53:24.932266       1 config.go:200] "Starting service config controller"
	I1002 20:53:24.932286       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:53:24.932310       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:53:24.932314       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:53:24.932324       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:53:24.932328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:53:24.934894       1 config.go:309] "Starting node config controller"
	I1002 20:53:24.934915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:53:24.934921       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:53:25.033228       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:53:25.033269       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:53:25.033240       1 shared_informer.go:356] "Caches are synced" controller="service config"
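	# The single kube-proxy error above is advisory: nodePortAddresses is
	# unset, so NodePorts accept connections on all local IPs. On minikube's
	# kubeadm-style layout (an assumption) the suggested setting lives in the
	# kube-proxy ConfigMap; a sketch:
	kubectl --context functional-535239 -n kube-system edit configmap kube-proxy
	# then set nodePortAddresses: ["primary"] and restart the kube-proxy pods:
	kubectl --context functional-535239 -n kube-system delete pod -l k8s-app=kube-proxy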
	
	
	==> kube-scheduler [653466d13f9c] <==
	I1002 20:53:20.926449       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:53:23.289778       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:53:23.289820       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:53:23.289830       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:53:23.291621       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:53:23.332624       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:53:23.332870       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:53:23.354050       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:53:23.354906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:53:23.355038       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:53:23.357656       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:53:23.457759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
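	# The warnings above show the scheduler failing a best-effort configmap
	# lookup and continuing without authentication configuration; the log
	# names the fix itself. One concrete instance (binding name is an
	# illustrative placeholder; kube-scheduler authenticates as a user here,
	# not a service account):
	kubectl --context functional-535239 -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader --user=system:kube-scheduler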
	
	
	==> kube-scheduler [7a3c81552efc] <==
	I1002 20:53:14.775559       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Oct 02 21:02:14 functional-535239 kubelet[9199]: E1002 21:02:14.844735    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 21:02:16 functional-535239 kubelet[9199]: E1002 21:02:16.846389    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:02:20 functional-535239 kubelet[9199]: E1002 21:02:20.846938    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 21:02:21 functional-535239 kubelet[9199]: E1002 21:02:21.845931    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2vwc4" podUID="0a8c6bfa-b9eb-4659-b602-8e8cd640b34b"
	Oct 02 21:02:26 functional-535239 kubelet[9199]: E1002 21:02:26.843973    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 21:02:27 functional-535239 kubelet[9199]: E1002 21:02:27.845332    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:02:31 functional-535239 kubelet[9199]: E1002 21:02:31.846828    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 21:02:36 functional-535239 kubelet[9199]: E1002 21:02:36.846462    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2vwc4" podUID="0a8c6bfa-b9eb-4659-b602-8e8cd640b34b"
	Oct 02 21:02:38 functional-535239 kubelet[9199]: E1002 21:02:38.848255    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:02:40 functional-535239 kubelet[9199]: E1002 21:02:40.844328    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 21:02:42 functional-535239 kubelet[9199]: E1002 21:02:42.849266    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 21:02:49 functional-535239 kubelet[9199]: E1002 21:02:49.846060    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2vwc4" podUID="0a8c6bfa-b9eb-4659-b602-8e8cd640b34b"
	Oct 02 21:02:51 functional-535239 kubelet[9199]: E1002 21:02:51.844241    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 21:02:52 functional-535239 kubelet[9199]: E1002 21:02:52.847219    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:02:57 functional-535239 kubelet[9199]: E1002 21:02:57.845986    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 21:03:04 functional-535239 kubelet[9199]: E1002 21:03:04.845574    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2vwc4" podUID="0a8c6bfa-b9eb-4659-b602-8e8cd640b34b"
	Oct 02 21:03:04 functional-535239 kubelet[9199]: E1002 21:03:04.850092    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:03:06 functional-535239 kubelet[9199]: E1002 21:03:06.844370    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 21:03:12 functional-535239 kubelet[9199]: E1002 21:03:12.846485    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 21:03:15 functional-535239 kubelet[9199]: E1002 21:03:15.846242    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:03:16 functional-535239 kubelet[9199]: E1002 21:03:16.846507    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2vwc4" podUID="0a8c6bfa-b9eb-4659-b602-8e8cd640b34b"
	Oct 02 21:03:20 functional-535239 kubelet[9199]: E1002 21:03:20.844291    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 21:03:23 functional-535239 kubelet[9199]: E1002 21:03:23.846171    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 21:03:26 functional-535239 kubelet[9199]: E1002 21:03:26.848414    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vbd2c" podUID="3a8de19e-47a2-4b3b-bac9-bd7e2cd3774e"
	Oct 02 21:03:28 functional-535239 kubelet[9199]: E1002 21:03:28.846968    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2vwc4" podUID="0a8c6bfa-b9eb-4659-b602-8e8cd640b34b"
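	# Every kubelet error above has one root cause: Docker Hub's
	# unauthenticated pull rate limit, the same failure that blocks nginx-svc,
	# sp-pod, and the dashboard pods elsewhere in this report. A standard
	# mitigation sketch (credentials are placeholders) is to pull authenticated:
	kubectl --context functional-535239 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	# attach it to the default service account so existing pod specs use it:
	kubectl --context functional-535239 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'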
	
	
	==> storage-provisioner [622ea6d704c9] <==
	I1002 20:52:34.830895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 20:52:34.847760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 20:52:34.848081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 20:52:34.850806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:38.305518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:42.566358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:46.165219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:49.218836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:52.240986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:52.246186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:52:52.246418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 20:52:52.246602       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-535239_2b3d1e7c-7fa1-4cfa-a597-de373c27c43a!
	I1002 20:52:52.247484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7083b156-288c-4c5f-bb0f-9016da234852", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-535239_2b3d1e7c-7fa1-4cfa-a597-de373c27c43a became leader
	W1002 20:52:52.253374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:52.256450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:52:52.347725       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-535239_2b3d1e7c-7fa1-4cfa-a597-de373c27c43a!
	W1002 20:52:54.259164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:54.263683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:56.267195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:56.274152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:58.276880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:58.281986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:00.286393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:00.295461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
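	# The deprecation warnings above repeat because this provisioner's leader
	# election (the kube-system/k8s.io-minikube-hostpath lease acquired at
	# 20:52:52) still uses core/v1 Endpoints objects; they are harmless noise
	# until that API is removed. The replacement resource can be listed with:
	kubectl --context functional-535239 get endpointslices.discovery.k8s.io -A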
	
	
	==> storage-provisioner [88eb627827a5] <==
	W1002 21:03:06.712422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:08.716241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:08.723126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:10.726457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:10.731210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:12.734358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:12.739250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:14.742503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:14.747134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:16.750805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:16.757604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:18.761534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:18.765750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:20.768860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:20.775927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:22.779200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:22.783669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:24.787151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:24.791751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:26.794359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:26.798975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:28.802934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:28.807532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:30.810767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:03:30.817709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535239 -n functional-535239
helpers_test.go:269: (dbg) Run:  kubectl --context functional-535239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vbd2c kubernetes-dashboard-855c9754f9-2vwc4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-535239 describe pod busybox-mount nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vbd2c kubernetes-dashboard-855c9754f9-2vwc4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-535239 describe pod busybox-mount nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vbd2c kubernetes-dashboard-855c9754f9-2vwc4: exit status 1 (119.077092ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535239/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:58:18 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://685771cac281236704ea598002fa40b5234567396dcc5244b18659f707055c54
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 20:58:21 +0000
	      Finished:     Thu, 02 Oct 2025 20:58:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwb7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rwb7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-535239
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.182s (2.182s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535239/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:53:52 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8c7n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r8c7n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m39s                   default-scheduler  Successfully assigned default/nginx-svc to functional-535239
	  Warning  Failed     8m11s (x3 over 9m26s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m43s (x5 over 9m40s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m42s (x2 over 9m40s)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m42s (x5 over 9m40s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m32s (x21 over 9m40s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m32s (x21 over 9m40s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535239/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:54:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnsv7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jnsv7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m27s                   default-scheduler  Successfully assigned default/sp-pod to functional-535239
	  Normal   Pulling    6m37s (x5 over 9m27s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m36s (x5 over 9m27s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m36s (x5 over 9m27s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m17s (x21 over 9m26s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m17s (x21 over 9m26s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vbd2c" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2vwc4" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-535239 describe pod busybox-mount nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vbd2c kubernetes-dashboard-855c9754f9-2vwc4: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.34s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (248.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e65464e1-c943-4af2-a41f-40afeb087995] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003020307s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-535239 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-535239 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-535239 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-535239 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [77879ce4-3efb-4271-aec1-7fa1f7e941a0] Pending
helpers_test.go:352: "sp-pod" [77879ce4-3efb-4271-aec1-7fa1f7e941a0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1002 20:55:55.463937  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:56:23.166794  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535239 -n functional-535239
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 20:58:04.962359259 +0000 UTC m=+1822.130536861
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-535239 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-535239 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-535239/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:54:04 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnsv7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-jnsv7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/sp-pod to functional-535239
  Normal   Pulling    70s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     69s (x5 over 4m)     kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     69s (x5 over 4m)     kubelet            Error: ErrImagePull
  Normal   BackOff    6s (x15 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     6s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-535239 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-535239 logs sp-pod -n default: exit status 1 (87.568707ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-535239 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
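The describe output above suggests the claim itself bound (the pod scheduled and its volumes mounted); the blocker is the image pull. A quick check of the claim's phase, with the claim name taken from the output above:

    kubectl --context functional-535239 get pvc myclaim -o jsonpath='{.status.phase}{"\n"}'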
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-535239
helpers_test.go:243: (dbg) docker inspect functional-535239:

-- stdout --
	[
	    {
	        "Id": "46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d",
	        "Created": "2025-10-02T20:50:24.183134624Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:50:24.251631866Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/hostname",
	        "HostsPath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/hosts",
	        "LogPath": "/var/lib/docker/containers/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d/46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d-json.log",
	        "Name": "/functional-535239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-535239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-535239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46a1576bc405b93e50995b2c550f8e992c97167b4f4a5476df5205af01aa159d",
	                "LowerDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f-init/diff:/var/lib/docker/overlay2/3c380b0850506122817bc2917299dd60fe15a32ab35b7debe4519f1f9045f4d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0999c4ba4309cf82ed5ef0f2212d787d521f03b4e7da9bc1a3490dae03f68f1f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-535239",
	                "Source": "/var/lib/docker/volumes/functional-535239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-535239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-535239",
	                "name.minikube.sigs.k8s.io": "functional-535239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf5af4a7b80105780b243ce571b6861491414d5193f40c32670e4eb96107518d",
	            "SandboxKey": "/var/run/docker/netns/cf5af4a7b801",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33543"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-535239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:36:c0:46:ec:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3c79f9b02da7aeeb1af740959dcd7331e5dce7dc26edba67716ac0f4e2e9f15",
	                    "EndpointID": "48e9853503043613aa687abb17f22ba1f9c570458cc93073723bebc89fdb40ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-535239",
	                        "46a1576bc405"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
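The inspect output above shows the five guest ports the kicbase container publishes on loopback. A one-liner sketch for pulling a single mapping out of it (the same Go template minikube itself runs later in this log; container name taken from this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-535239
    # prints 33540 on this run; substitute 8441/tcp for the apiserver mapping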
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-535239 -n functional-535239
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-535239 logs -n 25: (1.151865521s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-535239 ssh sudo cat /usr/share/ca-certificates/703895.pem                                                                                        │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image ls                                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                    │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image load --daemon kicbase/echo-server:functional-535239 --alsologtostderr                                                               │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh sudo cat /etc/ssl/certs/7038952.pem                                                                                                   │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh sudo cat /usr/share/ca-certificates/7038952.pem                                                                                       │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image ls                                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                    │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image save kicbase/echo-server:functional-535239 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh sudo cat /etc/test/nested/copy/703895/hosts                                                                                           │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image rm kicbase/echo-server:functional-535239 --alsologtostderr                                                                          │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image ls                                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image ls                                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ image   │ functional-535239 image save --daemon kicbase/echo-server:functional-535239 --alsologtostderr                                                               │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh echo hello                                                                                                                            │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ ssh     │ functional-535239 ssh cat /etc/hostname                                                                                                                     │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ tunnel  │ functional-535239 tunnel --alsologtostderr                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │                     │
	│ tunnel  │ functional-535239 tunnel --alsologtostderr                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │                     │
	│ tunnel  │ functional-535239 tunnel --alsologtostderr                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │                     │
	│ service │ functional-535239 service list                                                                                                                              │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ service │ functional-535239 service list -o json                                                                                                                      │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ service │ functional-535239 service --namespace=default --https --url hello-node                                                                                      │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ service │ functional-535239 service hello-node --url --format={{.IP}}                                                                                                 │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	│ service │ functional-535239 service hello-node --url                                                                                                                  │ functional-535239 │ jenkins │ v1.37.0 │ 02 Oct 25 20:53 UTC │ 02 Oct 25 20:53 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
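	Every row in the audit table is a plain CLI invocation against the same profile, so any step can be replayed by hand when triaging. One plausible replay of two of the rows above (flag order reconstructed; the audit log records only the raw args):

    out/minikube-linux-arm64 -p functional-535239 ssh "cat /etc/hostname"
    out/minikube-linux-arm64 -p functional-535239 service list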
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:42.119101  750191 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:42.119300  750191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:42.119305  750191 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:42.119309  750191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:42.119588  750191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:52:42.120034  750191 out.go:368] Setting JSON to false
	I1002 20:52:42.121216  750191 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12889,"bootTime":1759425473,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:52:42.121285  750191 start.go:140] virtualization:  
	I1002 20:52:42.125704  750191 out.go:179] * [functional-535239] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:52:42.129943  750191 notify.go:220] Checking for updates...
	I1002 20:52:42.133003  750191 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:52:42.136098  750191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:42.138966  750191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:52:42.142012  750191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:52:42.144903  750191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:52:42.148256  750191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:52:42.153108  750191 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:52:42.153293  750191 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:52:42.192234  750191 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:52:42.192379  750191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:42.289772  750191 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 20:52:42.277734479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:52:42.289873  750191 docker.go:318] overlay module found
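	docker system info --format "{{json .}}" dumps the whole engine state; when only one field matters, a narrower Go template avoids parsing the JSON blob, e.g.:

    docker system info --format '{{.Driver}}'        # overlay2 on this host
    docker system info --format '{{.CgroupDriver}}'  # cgroupfs, matching the driver detection later in this log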
	I1002 20:52:42.295305  750191 out.go:179] * Using the docker driver based on existing profile
	I1002 20:52:42.298248  750191 start.go:304] selected driver: docker
	I1002 20:52:42.298260  750191 start.go:924] validating driver "docker" against &{Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:42.298411  750191 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:52:42.298531  750191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:42.363087  750191 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 20:52:42.353764064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:52:42.363585  750191 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:52:42.363612  750191 cni.go:84] Creating CNI manager for ""
	I1002 20:52:42.363676  750191 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:52:42.363725  750191 start.go:348] cluster config:
	{Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:42.366968  750191 out.go:179] * Starting "functional-535239" primary control-plane node in "functional-535239" cluster
	I1002 20:52:42.369634  750191 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:52:42.372496  750191 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:42.375323  750191 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:52:42.375403  750191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:42.375470  750191 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 20:52:42.375518  750191 cache.go:58] Caching tarball of preloaded images
	I1002 20:52:42.375620  750191 preload.go:233] Found /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 20:52:42.375630  750191 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 20:52:42.375739  750191 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/config.json ...
	I1002 20:52:42.395609  750191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:52:42.395621  750191 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:52:42.395634  750191 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:52:42.395659  750191 start.go:360] acquireMachinesLock for functional-535239: {Name:mk8f0e348f60c4cc568bf8fd0bdf7fca5fe71595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:52:42.395715  750191 start.go:364] duration metric: took 35.471µs to acquireMachinesLock for "functional-535239"
	I1002 20:52:42.395735  750191 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:52:42.395747  750191 fix.go:54] fixHost starting: 
	I1002 20:52:42.396002  750191 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
	I1002 20:52:42.413769  750191 fix.go:112] recreateIfNeeded on functional-535239: state=Running err=<nil>
	W1002 20:52:42.413795  750191 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:52:42.417119  750191 out.go:252] * Updating the running docker "functional-535239" container ...
	I1002 20:52:42.417145  750191 machine.go:93] provisionDockerMachine start ...
	I1002 20:52:42.417253  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:42.434195  750191 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:42.434507  750191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1002 20:52:42.434514  750191 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:52:42.569979  750191 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-535239
	
	I1002 20:52:42.569993  750191 ubuntu.go:182] provisioning hostname "functional-535239"
	I1002 20:52:42.570066  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:42.588322  750191 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:42.588638  750191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1002 20:52:42.588656  750191 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-535239 && echo "functional-535239" | sudo tee /etc/hostname
	I1002 20:52:42.737107  750191 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-535239
	
	I1002 20:52:42.737192  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:42.757569  750191 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:42.757882  750191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1002 20:52:42.757897  750191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-535239' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-535239/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-535239' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:52:42.893728  750191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:52:42.893744  750191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-702037/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-702037/.minikube}
	I1002 20:52:42.893761  750191 ubuntu.go:190] setting up certificates
	I1002 20:52:42.893771  750191 provision.go:84] configureAuth start
	I1002 20:52:42.893832  750191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-535239
	I1002 20:52:42.911588  750191 provision.go:143] copyHostCerts
	I1002 20:52:42.911647  750191 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem, removing ...
	I1002 20:52:42.911689  750191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem
	I1002 20:52:42.911814  750191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/cert.pem (1123 bytes)
	I1002 20:52:42.911928  750191 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem, removing ...
	I1002 20:52:42.911933  750191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem
	I1002 20:52:42.911962  750191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/key.pem (1675 bytes)
	I1002 20:52:42.912021  750191 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem, removing ...
	I1002 20:52:42.912025  750191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem
	I1002 20:52:42.912047  750191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-702037/.minikube/ca.pem (1078 bytes)
	I1002 20:52:42.912100  750191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem org=jenkins.functional-535239 san=[127.0.0.1 192.168.49.2 functional-535239 localhost minikube]
	I1002 20:52:43.127577  750191 provision.go:177] copyRemoteCerts
	I1002 20:52:43.127630  750191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:52:43.127676  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:43.147941  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:52:43.250192  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:52:43.268153  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:52:43.286737  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:52:43.305522  750191 provision.go:87] duration metric: took 411.726561ms to configureAuth
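	configureAuth above generated a server certificate with SANs [127.0.0.1 192.168.49.2 functional-535239 localhost minikube] and scp'd it to /etc/docker inside the node. A quick check of what actually landed there (a sketch; assumes openssl is available in the kicbase image):

    out/minikube-linux-arm64 -p functional-535239 ssh \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"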
	I1002 20:52:43.305539  750191 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:52:43.305739  750191 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:52:43.305798  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:43.322906  750191 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:43.323205  750191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1002 20:52:43.323212  750191 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 20:52:43.462482  750191 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 20:52:43.462491  750191 ubuntu.go:71] root file system type: overlay
	I1002 20:52:43.462639  750191 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 20:52:43.462703  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:43.480712  750191 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:43.481008  750191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1002 20:52:43.481087  750191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 20:52:43.626880  750191 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 20:52:43.626965  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:43.648958  750191 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:43.649251  750191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1002 20:52:43.649266  750191 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 20:52:43.790007  750191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:52:43.790022  750191 machine.go:96] duration metric: took 1.37287006s to provisionDockerMachine
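	The final command of the provisioning step is a generic install-only-if-changed idiom: diff exits non-zero when the rendered unit differs from the installed one, and only then is the new file swapped in and the daemon restarted (copied from the command above):

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
    # diff exit 0 (files identical) short-circuits the ||, so an unchanged unit costs no restart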
	I1002 20:52:43.790044  750191 start.go:293] postStartSetup for "functional-535239" (driver="docker")
	I1002 20:52:43.790054  750191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:52:43.790127  750191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:52:43.790167  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:43.809495  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:52:43.905977  750191 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:52:43.909410  750191 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:52:43.909463  750191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:52:43.909472  750191 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/addons for local assets ...
	I1002 20:52:43.909529  750191 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-702037/.minikube/files for local assets ...
	I1002 20:52:43.909608  750191 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/ssl/certs/7038952.pem -> 7038952.pem in /etc/ssl/certs
	I1002 20:52:43.909687  750191 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/test/nested/copy/703895/hosts -> hosts in /etc/test/nested/copy/703895
	I1002 20:52:43.909736  750191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/703895
	I1002 20:52:43.917620  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/ssl/certs/7038952.pem --> /etc/ssl/certs/7038952.pem (1708 bytes)
	I1002 20:52:43.935798  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/test/nested/copy/703895/hosts --> /etc/test/nested/copy/703895/hosts (40 bytes)
	I1002 20:52:43.954424  750191 start.go:296] duration metric: took 164.36033ms for postStartSetup
	I1002 20:52:43.954497  750191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:43.954535  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:43.971569  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:52:44.067124  750191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:52:44.072424  750191 fix.go:56] duration metric: took 1.676677005s for fixHost
	I1002 20:52:44.072451  750191 start.go:83] releasing machines lock for "functional-535239", held for 1.676729715s
	I1002 20:52:44.072531  750191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-535239
	I1002 20:52:44.090811  750191 ssh_runner.go:195] Run: cat /version.json
	I1002 20:52:44.090838  750191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:52:44.090864  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:44.090900  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:52:44.112039  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:52:44.114319  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:52:44.205790  750191 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:44.300292  750191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:52:44.304630  750191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:52:44.304687  750191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:52:44.312302  750191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:52:44.312318  750191 start.go:495] detecting cgroup driver to use...
	I1002 20:52:44.312348  750191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:52:44.312462  750191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:44.326326  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 20:52:44.336686  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 20:52:44.346573  750191 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 20:52:44.346648  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 20:52:44.355672  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:52:44.364660  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 20:52:44.373455  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:52:44.383297  750191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:52:44.391531  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 20:52:44.401095  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 20:52:44.410231  750191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 20:52:44.419802  750191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:52:44.427526  750191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:52:44.435436  750191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:44.600657  750191 ssh_runner.go:195] Run: sudo systemctl restart containerd
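	The sed runs above rewrite /etc/containerd/config.toml in place so containerd uses the cgroupfs driver and the runc v2 shim before the restart. The edits can be spot-checked inside the node with:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml      # expect: SystemdCgroup = false
    grep -n 'io.containerd.runc.v2' /etc/containerd/config.toml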
	I1002 20:52:44.828541  750191 start.go:495] detecting cgroup driver to use...
	I1002 20:52:44.828577  750191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:52:44.828626  750191 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 20:52:44.844312  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:44.868529  750191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:52:44.904109  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:44.917147  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 20:52:44.930569  750191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:44.946524  750191 ssh_runner.go:195] Run: which cri-dockerd
	I1002 20:52:44.950220  750191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 20:52:44.957503  750191 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 20:52:44.975493  750191 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 20:52:45.153365  750191 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 20:52:45.348181  750191 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 20:52:45.348286  750191 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 20:52:45.365481  750191 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 20:52:45.381270  750191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:45.540955  750191 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 20:53:11.627431  750191 ssh_runner.go:235] Completed: sudo systemctl restart docker: (26.086453863s)
	I1002 20:53:11.627496  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:53:11.643034  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 20:53:11.661471  750191 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1002 20:53:11.686719  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:53:11.703397  750191 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 20:53:11.832593  750191 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 20:53:11.954496  750191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:53:12.079710  750191 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 20:53:12.096157  750191 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 20:53:12.109548  750191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:53:12.224675  750191 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 20:53:12.303010  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 20:53:12.317362  750191 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 20:53:12.317444  750191 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 20:53:12.321218  750191 start.go:563] Will wait 60s for crictl version
	I1002 20:53:12.321275  750191 ssh_runner.go:195] Run: which crictl
	I1002 20:53:12.325013  750191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:53:12.354351  750191 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 20:53:12.354416  750191 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:53:12.377399  750191 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 20:53:12.403473  750191 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 20:53:12.403570  750191 cli_runner.go:164] Run: docker network inspect functional-535239 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:53:12.420412  750191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
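	The network-inspect template above packs name, driver, subnet, gateway, MTU, and container IPs into one JSON blob; for just the addressing, a smaller template suffices (values shown in the comment are from this run):

    docker network inspect functional-535239 \
      --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
    # 192.168.49.0/24 via 192.168.49.1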
	I1002 20:53:12.427271  750191 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:53:12.429949  750191 kubeadm.go:883] updating cluster {Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:53:12.430116  750191 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:53:12.430217  750191 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:53:12.449301  750191 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-535239
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 20:53:12.449312  750191 docker.go:621] Images already preloaded, skipping extraction
	I1002 20:53:12.449397  750191 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:53:12.469473  750191 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-535239
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 20:53:12.469488  750191 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:53:12.469497  750191 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 docker true true} ...
	I1002 20:53:12.469615  750191 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-535239 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
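	The kubelet drop-in above clears the packaged ExecStart and relaunches kubelet from the version-pinned minikube binary with the node IP and hostname override. Once written, the effective unit can be reviewed with:

    out/minikube-linux-arm64 -p functional-535239 ssh "sudo systemctl cat kubelet"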
	I1002 20:53:12.469687  750191 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 20:53:12.521056  750191 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:53:12.521126  750191 cni.go:84] Creating CNI manager for ""
	I1002 20:53:12.521148  750191 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:53:12.521155  750191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:53:12.521180  750191 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-535239 NodeName:functional-535239 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:53:12.521308  750191 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-535239"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:53:12.521374  750191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:53:12.529224  750191 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:53:12.529285  750191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:53:12.537076  750191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1002 20:53:12.550220  750191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:53:12.563479  750191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I1002 20:53:12.576034  750191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:53:12.579660  750191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:53:12.701147  750191 ssh_runner.go:195] Run: sudo systemctl start kubelet
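
The "scp memory" steps above push in-memory buffers to the node rather than copying files from disk. A rough equivalent that shells out to ssh with the payload on stdin; the host string and error handling are illustrative, not minikube's sshutil implementation:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyToHost writes an in-memory payload to a remote path, loosely mirroring
// the "scp memory --> /etc/systemd/..." steps in the log above.
func copyToHost(host, path string, payload []byte) error {
	cmd := exec.Command("ssh", host, "sudo tee "+path+" >/dev/null")
	cmd.Stdin = bytes.NewReader(payload)
	return cmd.Run()
}

func main() {
	err := copyToHost("docker@127.0.0.1", "/var/tmp/minikube/kubeadm.yaml.new",
		[]byte("kind: InitConfiguration\n"))
	fmt.Println(err)
}
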
	I1002 20:53:12.715754  750191 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239 for IP: 192.168.49.2
	I1002 20:53:12.715763  750191 certs.go:195] generating shared ca certs ...
	I1002 20:53:12.715778  750191 certs.go:227] acquiring lock for ca certs: {Name:mk80feb87d46a3c61de00b383dd8ac7fd2e19095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:53:12.715914  750191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key
	I1002 20:53:12.715951  750191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key
	I1002 20:53:12.715957  750191 certs.go:257] generating profile certs ...
	I1002 20:53:12.716038  750191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.key
	I1002 20:53:12.716082  750191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/apiserver.key.3e9dc728
	I1002 20:53:12.716122  750191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/proxy-client.key
	I1002 20:53:12.716228  750191 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/703895.pem (1338 bytes)
	W1002 20:53:12.716255  750191 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-702037/.minikube/certs/703895_empty.pem, impossibly tiny 0 bytes
	I1002 20:53:12.716262  750191 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:53:12.716285  750191 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:53:12.716307  750191 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:53:12.716325  750191 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/certs/key.pem (1675 bytes)
	I1002 20:53:12.716363  750191 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/ssl/certs/7038952.pem (1708 bytes)
	I1002 20:53:12.716934  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:53:12.736777  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:53:12.762938  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:53:12.782869  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:53:12.809791  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:53:12.831998  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:53:12.852918  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:53:12.892291  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:53:12.956502  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/certs/703895.pem --> /usr/share/ca-certificates/703895.pem (1338 bytes)
	I1002 20:53:13.003533  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/ssl/certs/7038952.pem --> /usr/share/ca-certificates/7038952.pem (1708 bytes)
	I1002 20:53:13.065322  750191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-702037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:53:13.098406  750191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:53:13.118067  750191 ssh_runner.go:195] Run: openssl version
	I1002 20:53:13.125997  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:53:13.137718  750191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:53:13.148951  750191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:53:13.149009  750191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:53:13.212407  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:53:13.232702  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/703895.pem && ln -fs /usr/share/ca-certificates/703895.pem /etc/ssl/certs/703895.pem"
	I1002 20:53:13.251918  750191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/703895.pem
	I1002 20:53:13.256535  750191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:50 /usr/share/ca-certificates/703895.pem
	I1002 20:53:13.256590  750191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/703895.pem
	I1002 20:53:13.318126  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/703895.pem /etc/ssl/certs/51391683.0"
	I1002 20:53:13.331753  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7038952.pem && ln -fs /usr/share/ca-certificates/7038952.pem /etc/ssl/certs/7038952.pem"
	I1002 20:53:13.347336  750191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7038952.pem
	I1002 20:53:13.353900  750191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:50 /usr/share/ca-certificates/7038952.pem
	I1002 20:53:13.353963  750191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7038952.pem
	I1002 20:53:13.461113  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7038952.pem /etc/ssl/certs/3ec20f2e.0"
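
Each hash/symlink cycle above makes a CA discoverable via OpenSSL's hashed cert-directory lookup: compute the subject hash, then point <hash>.0 at the PEM. A compact Go sketch of one cycle, using the same commands as the log with simplified error handling:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash for a PEM and symlinks
// /etc/ssl/certs/<hash>.0 to it, as the log steps above do.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}
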
	I1002 20:53:13.484446  750191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:53:13.491420  750191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:53:13.562110  750191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:53:13.682832  750191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:53:13.764415  750191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:53:13.816400  750191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:53:13.868511  750191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
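
The -checkend 86400 runs above ask whether each certificate expires within the next 24 hours. The same check in pure Go using crypto/x509 instead of shelling out to openssl (a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires inside d, the same
// question `openssl x509 -checkend 86400` answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
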
	I1002 20:53:13.936859  750191 kubeadm.go:400] StartCluster: {Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:53:13.936987  750191 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 20:53:13.990998  750191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:53:14.000365  750191 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:53:14.000385  750191 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:53:14.000439  750191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:53:14.011862  750191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:53:14.012408  750191 kubeconfig.go:125] found "functional-535239" server: "https://192.168.49.2:8441"
	I1002 20:53:14.014030  750191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:53:14.032300  750191 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:50:34.697041181 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:53:12.570788661 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
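
Drift detection above leans on diff's exit-code contract: 0 means identical, 1 means the configs differ and the unified diff explains how. A small Go sketch of that contract, with paths from the log; this is not the actual kubeadm.go logic:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps its exit code to a drift
// verdict: exit 0 = identical, exit 1 = drifted (out holds the diff).
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ
	}
	return false, "", err // diff itself failed (missing file, etc.)
}

func main() {
	drift, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drift, err)
	fmt.Print(diff)
}
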
	I1002 20:53:14.032794  750191 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:53:14.032859  750191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 20:53:14.076433  750191 docker.go:484] Stopping containers: [187fd4e1e609 7a3c81552efc 75b84ad57409 81302a06478d 0da1cecdeb04 5cbadbc81f7a b8a6333fb24e c9e6f4706d9c 622ea6d704c9 798fdb462df5 283f481a754c aa21be7130bc d5d397dcbd93 bb5924326d81 68c1d2954148 f7036054826c c6ea8bddb5d8 2ef6ea3b15ba e2c6097b9495 f4bfe603591e 88bb88ebdacf fad035d87522 27615c3108e8 2093f7594ce1 a45d18270d2d a0d37d74d19a f6fd2641bb6f 43491e5b19f7 55b2208aca5e 3ccd3e7cc316 89dddd5ba052 a9b41d4dd609]
	I1002 20:53:14.076526  750191 ssh_runner.go:195] Run: docker stop 187fd4e1e609 7a3c81552efc 75b84ad57409 81302a06478d 0da1cecdeb04 5cbadbc81f7a b8a6333fb24e c9e6f4706d9c 622ea6d704c9 798fdb462df5 283f481a754c aa21be7130bc d5d397dcbd93 bb5924326d81 68c1d2954148 f7036054826c c6ea8bddb5d8 2ef6ea3b15ba e2c6097b9495 f4bfe603591e 88bb88ebdacf fad035d87522 27615c3108e8 2093f7594ce1 a45d18270d2d a0d37d74d19a f6fd2641bb6f 43491e5b19f7 55b2208aca5e 3ccd3e7cc316 89dddd5ba052 a9b41d4dd609
	I1002 20:53:15.605816  750191 ssh_runner.go:235] Completed: docker stop 187fd4e1e609 7a3c81552efc 75b84ad57409 81302a06478d 0da1cecdeb04 5cbadbc81f7a b8a6333fb24e c9e6f4706d9c 622ea6d704c9 798fdb462df5 283f481a754c aa21be7130bc d5d397dcbd93 bb5924326d81 68c1d2954148 f7036054826c c6ea8bddb5d8 2ef6ea3b15ba e2c6097b9495 f4bfe603591e 88bb88ebdacf fad035d87522 27615c3108e8 2093f7594ce1 a45d18270d2d a0d37d74d19a f6fd2641bb6f 43491e5b19f7 55b2208aca5e 3ccd3e7cc316 89dddd5ba052 a9b41d4dd609: (1.529260759s)
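
Stopping the control plane above is a two-step docker ps / docker stop using the k8s_..._(kube-system)_ container-name convention cri-dockerd applies. A minimal sketch with the same filters as the log and simplified error handling:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystem lists all kube-system containers by Docker's k8s_ naming
// convention and stops them in one batch, as the two log steps above do.
func stopKubeSystem() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	fmt.Println(stopKubeSystem())
}
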
	I1002 20:53:15.605883  750191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:53:15.765404  750191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:53:15.774479  750191 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 20:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 20:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:50 /etc/kubernetes/scheduler.conf
	
	I1002 20:53:15.774537  750191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:53:15.785525  750191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:53:15.793919  750191 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:53:15.793972  750191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:53:15.803885  750191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:53:15.812783  750191 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:53:15.812837  750191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:53:15.825111  750191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:53:15.834469  750191 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:53:15.834520  750191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
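
The grep-then-rm sequence above prunes kubeconfigs that no longer reference the expected control-plane endpoint so kubeadm can regenerate them. An equivalent sketch in Go, with the path and endpoint taken from the log; not minikube's implementation:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleConf removes a kubeconfig that does not mention the expected
// endpoint, mirroring the grep/rm pair in the log above.
func pruneStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // still points at the right control plane
	}
	return os.Remove(path) // stale: kubeadm will regenerate it
}

func main() {
	err := pruneStaleConf("/etc/kubernetes/scheduler.conf", "https://control-plane.minikube.internal:8441")
	fmt.Println(err)
}
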
	I1002 20:53:15.852087  750191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:53:15.861872  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:53:15.993790  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:53:18.463283  750191 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.469468348s)
	I1002 20:53:18.463347  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:53:18.685976  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:53:18.746132  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:53:18.812659  750191 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:53:18.812729  750191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:53:19.313722  750191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:53:19.812994  750191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:53:20.312849  750191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:53:20.354730  750191 api_server.go:72] duration metric: took 1.542081924s to wait for apiserver process to appear ...
	I1002 20:53:20.354744  750191 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:53:20.354762  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:23.243769  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:53:23.243789  750191 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:53:23.243801  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:23.462529  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:53:23.462571  750191 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:53:23.462584  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:23.478451  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:53:23.478466  750191 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:53:23.854879  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:23.863491  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:53:23.863509  750191 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:53:24.355043  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:24.364314  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:53:24.364347  750191 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:53:24.854875  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:24.864163  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 20:53:24.879529  750191 api_server.go:141] control plane version: v1.34.1
	I1002 20:53:24.879545  750191 api_server.go:131] duration metric: took 4.524795885s to wait for apiserver health ...
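
The healthz loop above tolerates the 403 (anonymous probe before RBAC bootstrap completes) and 500 (poststarthooks still failing) responses and keeps polling until a plain 200 "ok". A self-contained sketch of such a loop; the skip-verify TLS config and 500ms cadence mirror the log, but the code is illustrative, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200,
// treating 403 and 500 as "not ready yet" rather than fatal.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute))
}
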
	I1002 20:53:24.879553  750191 cni.go:84] Creating CNI manager for ""
	I1002 20:53:24.879563  750191 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:53:24.882998  750191 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:53:24.885995  750191 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:53:24.897170  750191 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
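
The 496-byte conflist written above configures the bridge CNI. The exact payload is not shown in the log; the sketch below writes a plausible bridge+portmap conflist to the same path, so treat the JSON contents as an assumption, not minikube's literal file:

package main

import (
	"fmt"
	"os"
)

// bridgeConflist is an illustrative bridge CNI config of the kind written to
// /etc/cni/net.d above; field values are assumptions, subnet from the log.
const bridgeConflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
	fmt.Println(err)
}
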
	I1002 20:53:24.921504  750191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:53:24.925019  750191 system_pods.go:59] 7 kube-system pods found
	I1002 20:53:24.925039  750191 system_pods.go:61] "coredns-66bc5c9577-flhsr" [fb0d03c1-5814-435e-a45c-1cc06380b348] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:53:24.925046  750191 system_pods.go:61] "etcd-functional-535239" [47de3ad4-98da-4b1d-8251-802155975278] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:53:24.925075  750191 system_pods.go:61] "kube-apiserver-functional-535239" [5b29bca3-f7a6-4ef8-9a3c-c5c99ace8d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:53:24.925081  750191 system_pods.go:61] "kube-controller-manager-functional-535239" [3857bac9-b342-4545-8e11-c5fcb40a8dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:53:24.925087  750191 system_pods.go:61] "kube-proxy-bmrx5" [c51722b6-a375-4e82-a8db-2654b3ed6200] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 20:53:24.925093  750191 system_pods.go:61] "kube-scheduler-functional-535239" [143296b6-e81c-4bf2-8ce9-b576920e690b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:53:24.925100  750191 system_pods.go:61] "storage-provisioner" [e65464e1-c943-4af2-a41f-40afeb087995] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:53:24.925104  750191 system_pods.go:74] duration metric: took 3.591375ms to wait for pod list to return data ...
	I1002 20:53:24.925111  750191 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:53:24.927711  750191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:53:24.927731  750191 node_conditions.go:123] node cpu capacity is 2
	I1002 20:53:24.927742  750191 node_conditions.go:105] duration metric: took 2.627123ms to run NodePressure ...
	I1002 20:53:24.927795  750191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:53:25.201883  750191 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 20:53:25.205530  750191 kubeadm.go:743] kubelet initialised
	I1002 20:53:25.205541  750191 kubeadm.go:744] duration metric: took 3.646319ms waiting for restarted kubelet to initialise ...
	I1002 20:53:25.205560  750191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:53:25.218622  750191 ops.go:34] apiserver oom_adj: -16
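
The oom_adj probe above confirms the kernel will avoid OOM-killing the apiserver (-16 is strongly protected). Reading the same value in Go; the PID argument is a placeholder for the pgrep result:

package main

import (
	"fmt"
	"os"
	"strings"
)

// oomAdj reads the kernel OOM adjustment for a process, the same value the
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` step above prints.
func oomAdj(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := oomAdj(1) // substitute the kube-apiserver PID from pgrep
	fmt.Println(v, err)
}
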
	I1002 20:53:25.218641  750191 kubeadm.go:601] duration metric: took 11.218250716s to restartPrimaryControlPlane
	I1002 20:53:25.218650  750191 kubeadm.go:402] duration metric: took 11.281799386s to StartCluster
	I1002 20:53:25.218680  750191 settings.go:142] acquiring lock: {Name:mk05279472feb5063a5c2285eba6fd6d59490060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:53:25.218762  750191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:53:25.219527  750191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-702037/kubeconfig: {Name:mk451cd073bc3a44904ff8d0351225145e56e5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:53:25.219775  750191 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:53:25.220124  750191 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:53:25.220174  750191 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:53:25.220267  750191 addons.go:69] Setting storage-provisioner=true in profile "functional-535239"
	I1002 20:53:25.220279  750191 addons.go:238] Setting addon storage-provisioner=true in "functional-535239"
	W1002 20:53:25.220284  750191 addons.go:247] addon storage-provisioner should already be in state true
	I1002 20:53:25.220307  750191 host.go:66] Checking if "functional-535239" exists ...
	I1002 20:53:25.221024  750191 addons.go:69] Setting default-storageclass=true in profile "functional-535239"
	I1002 20:53:25.221036  750191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-535239"
	I1002 20:53:25.221389  750191 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
	I1002 20:53:25.221883  750191 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
	I1002 20:53:25.224267  750191 out.go:179] * Verifying Kubernetes components...
	I1002 20:53:25.228343  750191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:53:25.251104  750191 addons.go:238] Setting addon default-storageclass=true in "functional-535239"
	W1002 20:53:25.251115  750191 addons.go:247] addon default-storageclass should already be in state true
	I1002 20:53:25.251136  750191 host.go:66] Checking if "functional-535239" exists ...
	I1002 20:53:25.251583  750191 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
	I1002 20:53:25.273023  750191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:53:25.275845  750191 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:53:25.275857  750191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:53:25.275965  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:53:25.301653  750191 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:53:25.301665  750191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:53:25.301730  750191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
	I1002 20:53:25.345355  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:53:25.357756  750191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
	I1002 20:53:25.582232  750191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:53:25.601732  750191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:53:25.604741  750191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:53:25.866766  750191 node_ready.go:35] waiting up to 6m0s for node "functional-535239" to be "Ready" ...
	I1002 20:53:25.870534  750191 node_ready.go:49] node "functional-535239" is "Ready"
	I1002 20:53:25.870552  750191 node_ready.go:38] duration metric: took 3.767788ms for node "functional-535239" to be "Ready" ...
	I1002 20:53:25.870567  750191 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:53:25.870626  750191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:53:26.409552  750191 api_server.go:72] duration metric: took 1.18975197s to wait for apiserver process to appear ...
	I1002 20:53:26.409563  750191 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:53:26.409579  750191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:53:26.413737  750191 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 20:53:26.416907  750191 addons.go:514] duration metric: took 1.196712894s for enable addons: enabled=[default-storageclass storage-provisioner]
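
Addon enablement above is just the cluster's pinned kubectl applied to manifests staged under /etc/kubernetes/addons, with KUBECONFIG pointing at the in-VM kubeconfig. A sketch of that invocation, using the same command line as the log wrapped in Go:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon runs the cluster's own kubectl against a staged addon manifest,
// as the storageclass/storage-provisioner steps above do.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		fmt.Println(m, applyAddon(m))
	}
}
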
	I1002 20:53:26.419519  750191 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 20:53:26.420450  750191 api_server.go:141] control plane version: v1.34.1
	I1002 20:53:26.420463  750191 api_server.go:131] duration metric: took 10.895098ms to wait for apiserver health ...
	I1002 20:53:26.420469  750191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:53:26.423262  750191 system_pods.go:59] 7 kube-system pods found
	I1002 20:53:26.423279  750191 system_pods.go:61] "coredns-66bc5c9577-flhsr" [fb0d03c1-5814-435e-a45c-1cc06380b348] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:53:26.423286  750191 system_pods.go:61] "etcd-functional-535239" [47de3ad4-98da-4b1d-8251-802155975278] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:53:26.423295  750191 system_pods.go:61] "kube-apiserver-functional-535239" [5b29bca3-f7a6-4ef8-9a3c-c5c99ace8d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:53:26.423301  750191 system_pods.go:61] "kube-controller-manager-functional-535239" [3857bac9-b342-4545-8e11-c5fcb40a8dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:53:26.423305  750191 system_pods.go:61] "kube-proxy-bmrx5" [c51722b6-a375-4e82-a8db-2654b3ed6200] Running
	I1002 20:53:26.423310  750191 system_pods.go:61] "kube-scheduler-functional-535239" [143296b6-e81c-4bf2-8ce9-b576920e690b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:53:26.423314  750191 system_pods.go:61] "storage-provisioner" [e65464e1-c943-4af2-a41f-40afeb087995] Running
	I1002 20:53:26.423318  750191 system_pods.go:74] duration metric: took 2.844865ms to wait for pod list to return data ...
	I1002 20:53:26.423324  750191 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:53:26.425968  750191 default_sa.go:45] found service account: "default"
	I1002 20:53:26.425980  750191 default_sa.go:55] duration metric: took 2.652166ms for default service account to be created ...
	I1002 20:53:26.425988  750191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:53:26.429194  750191 system_pods.go:86] 7 kube-system pods found
	I1002 20:53:26.429212  750191 system_pods.go:89] "coredns-66bc5c9577-flhsr" [fb0d03c1-5814-435e-a45c-1cc06380b348] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:53:26.429220  750191 system_pods.go:89] "etcd-functional-535239" [47de3ad4-98da-4b1d-8251-802155975278] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:53:26.429227  750191 system_pods.go:89] "kube-apiserver-functional-535239" [5b29bca3-f7a6-4ef8-9a3c-c5c99ace8d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:53:26.429233  750191 system_pods.go:89] "kube-controller-manager-functional-535239" [3857bac9-b342-4545-8e11-c5fcb40a8dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:53:26.429237  750191 system_pods.go:89] "kube-proxy-bmrx5" [c51722b6-a375-4e82-a8db-2654b3ed6200] Running
	I1002 20:53:26.429242  750191 system_pods.go:89] "kube-scheduler-functional-535239" [143296b6-e81c-4bf2-8ce9-b576920e690b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:53:26.429245  750191 system_pods.go:89] "storage-provisioner" [e65464e1-c943-4af2-a41f-40afeb087995] Running
	I1002 20:53:26.429251  750191 system_pods.go:126] duration metric: took 3.258745ms to wait for k8s-apps to be running ...
	I1002 20:53:26.429258  750191 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:53:26.429320  750191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:53:26.442875  750191 system_svc.go:56] duration metric: took 13.609015ms WaitForService to wait for kubelet
	I1002 20:53:26.442892  750191 kubeadm.go:586] duration metric: took 1.223097035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:53:26.442908  750191 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:53:26.445733  750191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:53:26.445748  750191 node_conditions.go:123] node cpu capacity is 2
	I1002 20:53:26.445757  750191 node_conditions.go:105] duration metric: took 2.845275ms to run NodePressure ...
	I1002 20:53:26.445769  750191 start.go:241] waiting for startup goroutines ...
	I1002 20:53:26.445775  750191 start.go:246] waiting for cluster config update ...
	I1002 20:53:26.445785  750191 start.go:255] writing updated cluster config ...
	I1002 20:53:26.446085  750191 ssh_runner.go:195] Run: rm -f paused
	I1002 20:53:26.449710  750191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:53:26.454256  750191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-flhsr" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:53:28.460330  750191 pod_ready.go:104] pod "coredns-66bc5c9577-flhsr" is not "Ready", error: <nil>
	I1002 20:53:28.960286  750191 pod_ready.go:94] pod "coredns-66bc5c9577-flhsr" is "Ready"
	I1002 20:53:28.960301  750191 pod_ready.go:86] duration metric: took 2.506033714s for pod "coredns-66bc5c9577-flhsr" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:28.962999  750191 pod_ready.go:83] waiting for pod "etcd-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:30.469218  750191 pod_ready.go:94] pod "etcd-functional-535239" is "Ready"
	I1002 20:53:30.469232  750191 pod_ready.go:86] duration metric: took 1.506220062s for pod "etcd-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:30.471814  750191 pod_ready.go:83] waiting for pod "kube-apiserver-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:31.477519  750191 pod_ready.go:94] pod "kube-apiserver-functional-535239" is "Ready"
	I1002 20:53:31.477533  750191 pod_ready.go:86] duration metric: took 1.005707664s for pod "kube-apiserver-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:31.479839  750191 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:53:33.484924  750191 pod_ready.go:104] pod "kube-controller-manager-functional-535239" is not "Ready", error: <nil>
	W1002 20:53:35.485628  750191 pod_ready.go:104] pod "kube-controller-manager-functional-535239" is not "Ready", error: <nil>
	I1002 20:53:35.986493  750191 pod_ready.go:94] pod "kube-controller-manager-functional-535239" is "Ready"
	I1002 20:53:35.986508  750191 pod_ready.go:86] duration metric: took 4.50665665s for pod "kube-controller-manager-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:35.988869  750191 pod_ready.go:83] waiting for pod "kube-proxy-bmrx5" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:35.993327  750191 pod_ready.go:94] pod "kube-proxy-bmrx5" is "Ready"
	I1002 20:53:35.993340  750191 pod_ready.go:86] duration metric: took 4.458775ms for pod "kube-proxy-bmrx5" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:35.995545  750191 pod_ready.go:83] waiting for pod "kube-scheduler-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:35.999947  750191 pod_ready.go:94] pod "kube-scheduler-functional-535239" is "Ready"
	I1002 20:53:35.999963  750191 pod_ready.go:86] duration metric: took 4.406261ms for pod "kube-scheduler-functional-535239" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:53:35.999974  750191 pod_ready.go:40] duration metric: took 9.550244093s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:53:36.068583  750191 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:53:36.071729  750191 out.go:179] * Done! kubectl is now configured to use "functional-535239" cluster and "default" namespace by default
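
Note: the version skew reported above (kubectl 1.33.2 against a 1.34.1 server) is within kubectl's supported window of one minor version, so the message is informational rather than a failure. As a quick way to check the same skew from a workstation (assuming the kubeconfig context is already set):

    # Print client and server versions; a minor-version difference of at most 1 is supported.
    kubectl version --output=yaml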
	
	
	==> Docker <==
	Oct 02 20:53:19 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:19Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"3ccd3e7cc316814c9ea420fad1e70d84a598e0985b64a465249fecf9cd27b2e0\". Proceed without further sandbox information."
	Oct 02 20:53:19 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/78b70206d47ab83758e50246d7561f694e0f886f7811e67889cae601b245fbb5/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 20:53:19 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc8fbb1916d60f313a5784cb74f3dc625ea479605f6b1f89e2f048043c8cbc93/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 20:53:23 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 02 20:53:24 functional-535239 dockerd[7025]: time="2025-10-02T20:53:24.897496993Z" level=info msg="ignoring event" container=23c591f8f731ae12c49f39c45b7ea7bce8c7fce5fb7cd9bbabcd4521a393b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:53:39 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21c3faf1d56e30a9b9c8d05cfa282e8578aa30227c7e0e1785f0e924133d40b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:53:39 functional-535239 dockerd[7025]: time="2025-10-02T20:53:39.611704197Z" level=error msg="Not continuing with pull after error" error="errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
	Oct 02 20:53:39 functional-535239 dockerd[7025]: time="2025-10-02T20:53:39.611756235Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
	Oct 02 20:53:42 functional-535239 dockerd[7025]: time="2025-10-02T20:53:42.643996817Z" level=info msg="ignoring event" container=21c3faf1d56e30a9b9c8d05cfa282e8578aa30227c7e0e1785f0e924133d40b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:53:49 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1943608a6004fb2cac89bfb8f4b32d639e5ce2af003aafc67b1881c8a8ae6f8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:53:50 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:50Z" level=info msg="Stop pulling image kicbase/echo-server:latest: Status: Downloaded newer image for kicbase/echo-server:latest"
	Oct 02 20:53:52 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2fc794bc6d630c3c713fe9d2c7608f20c122a4749792b180f38a774f67d4de0c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:53:52 functional-535239 dockerd[7025]: time="2025-10-02T20:53:52.951414971Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:53:52 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:53:52Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 02 20:54:05 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:54:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3bd0c1a74e3b1616a4edcd203dedc1d43644a2924e7e2a1511fc82d93d002c17/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:54:05 functional-535239 dockerd[7025]: time="2025-10-02T20:54:05.337027123Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:54:06 functional-535239 dockerd[7025]: time="2025-10-02T20:54:06.046884790Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:54:20 functional-535239 dockerd[7025]: time="2025-10-02T20:54:20.075159593Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:54:30 functional-535239 dockerd[7025]: time="2025-10-02T20:54:30.073002011Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:54:43 functional-535239 dockerd[7025]: time="2025-10-02T20:54:43.069831389Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:55:21 functional-535239 dockerd[7025]: time="2025-10-02T20:55:21.074196355Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:55:24 functional-535239 dockerd[7025]: time="2025-10-02T20:55:24.051137577Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:56:50 functional-535239 dockerd[7025]: time="2025-10-02T20:56:50.173675268Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:56:50 functional-535239 cri-dockerd[7772]: time="2025-10-02T20:56:50Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 02 20:56:56 functional-535239 dockerd[7025]: time="2025-10-02T20:56:56.052440331Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
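
Note: the repeated "toomanyrequests" entries above are Docker Hub's unauthenticated pull rate limit throttling the node, not a fault inside the cluster. A minimal mitigation sketch, assuming a Docker Hub account or a private mirror is available (the names below are placeholders):

    # Authenticate so pulls count against the account's higher quota,
    # or point minikube at a registry mirror at cluster creation time.
    docker login -u <dockerhub-user>                        # prompts for a password or access token
    minikube start --registry-mirror=https://<mirror-host>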
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6bcf0a5c127bd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   4 minutes ago       Running             echo-server               0                   a1943608a6004       hello-node-75c85bcc94-nzng8                 default
	88eb627827a5b       ba04bb24b9575                                                                                 4 minutes ago       Running             storage-provisioner       3                   f45c3a7246ce2       storage-provisioner                         kube-system
	18514364c3f5d       138784d87c9c5                                                                                 4 minutes ago       Running             coredns                   2                   e2ee7241fdb7b       coredns-66bc5c9577-flhsr                    kube-system
	e91d7ad646c67       05baa95f5142d                                                                                 4 minutes ago       Running             kube-proxy                3                   2413c02c12fbd       kube-proxy-bmrx5                            kube-system
	e25c43ccdf4e3       43911e833d64d                                                                                 4 minutes ago       Running             kube-apiserver            0                   cc8fbb1916d60       kube-apiserver-functional-535239            kube-system
	ad29d8f68f353       7eb2c6ff0c5a7                                                                                 4 minutes ago       Running             kube-controller-manager   2                   78b70206d47ab       kube-controller-manager-functional-535239   kube-system
	7cd4763f35104       a1894772a478e                                                                                 4 minutes ago       Running             etcd                      2                   2587df64c0cfc       etcd-functional-535239                      kube-system
	653466d13f9c5       b5f57ec6b9867                                                                                 4 minutes ago       Running             kube-scheduler            3                   ca63a4d8b6392       kube-scheduler-functional-535239            kube-system
	187fd4e1e6097       05baa95f5142d                                                                                 4 minutes ago       Exited              kube-proxy                2                   81302a06478d5       kube-proxy-bmrx5                            kube-system
	7a3c81552efc6       b5f57ec6b9867                                                                                 4 minutes ago       Exited              kube-scheduler            2                   b8a6333fb24e1       kube-scheduler-functional-535239            kube-system
	622ea6d704c96       ba04bb24b9575                                                                                 5 minutes ago       Exited              storage-provisioner       2                   88bb88ebdacf1       storage-provisioner                         kube-system
	798fdb462df5b       138784d87c9c5                                                                                 5 minutes ago       Exited              coredns                   1                   f7036054826c1       coredns-66bc5c9577-flhsr                    kube-system
	aa21be7130bc0       a1894772a478e                                                                                 5 minutes ago       Exited              etcd                      1                   2ef6ea3b15ba5       etcd-functional-535239                      kube-system
	bb5924326d81d       7eb2c6ff0c5a7                                                                                 5 minutes ago       Exited              kube-controller-manager   1                   f4bfe603591ee       kube-controller-manager-functional-535239   kube-system
	
	
	==> coredns [18514364c3f5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54141 - 40596 "HINFO IN 1466766216576499033.4307511153640274082. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023174456s
	
	
	==> coredns [798fdb462df5] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58342 - 31472 "HINFO IN 1148319212355416395.5702211502949280260. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035875182s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-535239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-535239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=functional-535239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_50_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:50:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-535239
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:57:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:54:24 +0000   Thu, 02 Oct 2025 20:50:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:54:24 +0000   Thu, 02 Oct 2025 20:50:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:54:24 +0000   Thu, 02 Oct 2025 20:50:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:54:24 +0000   Thu, 02 Oct 2025 20:50:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-535239
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b06b61a93cf498296353568717f7c62
	  System UUID:                5a18d05d-5bd3-4799-867c-92b5e4e37cc0
	  Boot ID:                    da6cbe7f-2b2e-4cba-8b8d-394577434cdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-nzng8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-flhsr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m6s
	  kube-system                 etcd-functional-535239                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m12s
	  kube-system                 kube-apiserver-functional-535239             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-functional-535239    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-bmrx5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-functional-535239             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m4s                   kube-proxy       
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   Starting                 5m43s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 7m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m12s                  kubelet          Node functional-535239 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m12s                  kubelet          Node functional-535239 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m12s                  kubelet          Node functional-535239 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal   NodeReady                7m8s                   kubelet          Node functional-535239 status is now: NodeReady
	  Normal   RegisteredNode           7m7s                   node-controller  Node functional-535239 event: Registered Node functional-535239 in Controller
	  Normal   RegisteredNode           5m41s                  node-controller  Node functional-535239 event: Registered Node functional-535239 in Controller
	  Warning  ContainerGCFailed        5m12s (x2 over 6m12s)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 4m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    4m47s (x8 over 4m48s)  kubelet          Node functional-535239 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m47s (x7 over 4m48s)  kubelet          Node functional-535239 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  4m47s (x8 over 4m48s)  kubelet          Node functional-535239 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           4m40s                  node-controller  Node functional-535239 event: Registered Node functional-535239 in Controller
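
Note on the "Allocated resources" table above: the percentages follow from the capacity reported earlier in this section. 750m of CPU requested against 2000m allocatable is 37.5%, displayed truncated as 37%; 170Mi of memory against 8022300Ki allocatable is roughly 2.2%, displayed as 2%.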
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:27] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [7cd4763f3510] <==
	{"level":"warn","ts":"2025-10-02T20:53:22.147427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.170208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.182861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.203236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.218149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.234663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.253547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.269269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.287015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.311985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.338929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.365508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.378031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.396375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.422820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.444837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.472807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.474427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.488843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.503131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.524124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.544446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.560279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.578324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:53:22.650474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52512","server-name":"","error":"EOF"}
	
	
	==> etcd [aa21be7130bc] <==
	{"level":"warn","ts":"2025-10-02T20:52:20.982625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.004884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.022861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.084677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.120556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.146218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:52:21.201672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53312","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:53:01.076004Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:53:01.076069Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-535239","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T20:53:01.076192Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:53:01.076250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:53:08.078726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:53:08.079024Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T20:53:08.082931Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:53:08.083038Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084432Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084550Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:53:08.084601Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084738Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:53:08.084786Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:53:08.084825Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:53:08.087609Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T20:53:08.087868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:53:08.087907Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T20:53:08.087917Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-535239","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:58:06 up  3:40,  0 user,  load average: 0.16, 0.88, 1.40
	Linux functional-535239 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e25c43ccdf4e] <==
	I1002 20:53:23.378440       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 20:53:23.378507       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 20:53:23.397867       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 20:53:23.427052       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:53:23.438148       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 20:53:23.438469       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 20:53:23.438672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 20:53:23.446352       1 aggregator.go:171] initial CRD sync complete...
	I1002 20:53:23.446382       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 20:53:23.446389       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 20:53:23.446395       1 cache.go:39] Caches are synced for autoregister controller
	I1002 20:53:23.446775       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 20:53:23.473386       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:53:23.913742       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:53:24.183139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:53:25.061865       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:53:25.105287       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:53:25.145660       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:53:25.156606       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:53:26.971366       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 20:53:27.022350       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:53:27.076329       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:53:38.942910       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.108.86"}
	I1002 20:53:49.222100       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.175.100"}
	I1002 20:53:52.116583       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.60.50"}
	
	
	==> kube-controller-manager [ad29d8f68f35] <==
	I1002 20:53:26.671820       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:53:26.683653       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:53:26.684091       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:53:26.684229       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:53:26.688224       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 20:53:26.689685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:53:26.698326       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:53:26.698374       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:53:26.698426       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:53:26.702779       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 20:53:26.708984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:53:26.709059       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:53:26.710213       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:53:26.710293       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:53:26.710342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:53:26.713102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:53:26.713131       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:53:26.713140       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:53:26.713209       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:53:26.713525       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:53:26.714383       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:53:26.714694       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:53:26.722354       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:53:26.724019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:53:26.744701       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [bb5924326d81] <==
	I1002 20:52:25.552261       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:52:25.552410       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-535239"
	I1002 20:52:25.552506       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:52:25.554800       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 20:52:25.557550       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:52:25.564041       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:52:25.566890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:52:25.569567       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 20:52:25.573526       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 20:52:25.577887       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 20:52:25.581215       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:52:25.583602       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 20:52:25.583851       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 20:52:25.583978       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:52:25.584108       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:52:25.584251       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:52:25.584377       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:52:25.584380       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 20:52:25.584602       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:52:25.584939       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:52:25.585356       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:52:25.586423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:52:25.587653       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:52:25.592926       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:52:25.598403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [187fd4e1e609] <==
	
	
	==> kube-proxy [e91d7ad646c6] <==
	I1002 20:53:24.664468       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:53:24.763000       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:53:24.864178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:53:24.864261       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:53:24.864389       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:53:24.908668       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:53:24.908727       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:53:24.926515       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:53:24.927129       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:53:24.927145       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:53:24.932266       1 config.go:200] "Starting service config controller"
	I1002 20:53:24.932286       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:53:24.932310       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:53:24.932314       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:53:24.932324       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:53:24.932328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:53:24.934894       1 config.go:309] "Starting node config controller"
	I1002 20:53:24.934915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:53:24.934921       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:53:25.033228       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:53:25.033269       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:53:25.033240       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [653466d13f9c] <==
	I1002 20:53:20.926449       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:53:23.289778       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:53:23.289820       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:53:23.289830       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:53:23.291621       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:53:23.332624       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:53:23.332870       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:53:23.354050       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:53:23.354906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:53:23.355038       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:53:23.357656       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:53:23.457759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7a3c81552efc] <==
	I1002 20:53:14.775559       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Oct 02 20:56:03 functional-535239 kubelet[9199]: E1002 20:56:03.844395    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:56:10 functional-535239 kubelet[9199]: E1002 20:56:10.849496    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:56:17 functional-535239 kubelet[9199]: E1002 20:56:17.843716    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:56:25 functional-535239 kubelet[9199]: E1002 20:56:25.845766    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:56:29 functional-535239 kubelet[9199]: E1002 20:56:29.844132    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:56:36 functional-535239 kubelet[9199]: E1002 20:56:36.849122    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:56:42 functional-535239 kubelet[9199]: E1002 20:56:42.844640    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:56:50 functional-535239 kubelet[9199]: E1002 20:56:50.176519    9199 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:56:50 functional-535239 kubelet[9199]: E1002 20:56:50.176623    9199 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:56:50 functional-535239 kubelet[9199]: E1002 20:56:50.176694    9199 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(e95a9af5-43b5-4878-b771-26b3080275ba): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:56:50 functional-535239 kubelet[9199]: E1002 20:56:50.176727    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:56:56 functional-535239 kubelet[9199]: E1002 20:56:56.056072    9199 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:56:56 functional-535239 kubelet[9199]: E1002 20:56:56.056127    9199 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:56:56 functional-535239 kubelet[9199]: E1002 20:56:56.056206    9199 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(77879ce4-3efb-4271-aec1-7fa1f7e941a0): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:56:56 functional-535239 kubelet[9199]: E1002 20:56:56.056238    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:57:04 functional-535239 kubelet[9199]: E1002 20:57:04.846349    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:57:10 functional-535239 kubelet[9199]: E1002 20:57:10.844425    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:57:17 functional-535239 kubelet[9199]: E1002 20:57:17.846887    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:57:22 functional-535239 kubelet[9199]: E1002 20:57:22.843944    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:57:30 functional-535239 kubelet[9199]: E1002 20:57:30.847173    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:57:34 functional-535239 kubelet[9199]: E1002 20:57:34.844058    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:57:41 functional-535239 kubelet[9199]: E1002 20:57:41.851895    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:57:47 functional-535239 kubelet[9199]: E1002 20:57:47.844208    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	Oct 02 20:57:54 functional-535239 kubelet[9199]: E1002 20:57:54.852393    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e95a9af5-43b5-4878-b771-26b3080275ba"
	Oct 02 20:57:59 functional-535239 kubelet[9199]: E1002 20:57:59.844409    9199 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="77879ce4-3efb-4271-aec1-7fa1f7e941a0"
	
	
	==> storage-provisioner [622ea6d704c9] <==
	I1002 20:52:34.830895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 20:52:34.847760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 20:52:34.848081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 20:52:34.850806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:38.305518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:42.566358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:46.165219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:49.218836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:52.240986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:52.246186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:52:52.246418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 20:52:52.246602       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-535239_2b3d1e7c-7fa1-4cfa-a597-de373c27c43a!
	I1002 20:52:52.247484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7083b156-288c-4c5f-bb0f-9016da234852", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-535239_2b3d1e7c-7fa1-4cfa-a597-de373c27c43a became leader
	W1002 20:52:52.253374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:52.256450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:52:52.347725       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-535239_2b3d1e7c-7fa1-4cfa-a597-de373c27c43a!
	W1002 20:52:54.259164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:54.263683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:56.267195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:56.274152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:58.276880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:52:58.281986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:00.286393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:00.295461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [88eb627827a5] <==
	W1002 20:57:41.138014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:43.141602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:43.146237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:45.152825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:45.160824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:47.164197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:47.168596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:49.172112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:49.176582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:51.180101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:51.187022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:53.190025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:53.194465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:55.197835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:55.204438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:57.207561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:57.212229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:59.215727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:57:59.222536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:01.225296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:01.231910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:03.234973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:03.239237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:05.243429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:05.248035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535239 -n functional-535239
helpers_test.go:269: (dbg) Run:  kubectl --context functional-535239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-535239 describe pod nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-535239 describe pod nginx-svc sp-pod:
-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535239/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:53:52 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8c7n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r8c7n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m15s                 default-scheduler  Successfully assigned default/nginx-svc to functional-535239
	  Warning  Failed     2m46s (x3 over 4m1s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    78s (x5 over 4m15s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     77s (x2 over 4m15s)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     77s (x5 over 4m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     26s (x15 over 4m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x17 over 4m15s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535239/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:54:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnsv7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jnsv7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  4m2s                default-scheduler  Successfully assigned default/sp-pod to functional-535239
	  Normal   Pulling    72s (x5 over 4m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     71s (x5 over 4m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     71s (x5 over 4m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x15 over 4m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x15 over 4m1s)  kubelet            Error: ImagePullBackOff
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (248.16s)
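
Note on root cause: this failure is environmental rather than a test regression. Every kubelet event in the post-mortem above ends in Docker Hub's "toomanyrequests" response, so myfrontend and nginx never left ErrImagePull/ImagePullBackOff and the PVC mount logic was never exercised. A minimal mitigation sketch for a runner like this (standard docker/minikube CLI, not part of the harness; the profile name is taken from the logs):

	# pull once on the host, then side-load so kubelet can use the cached image
	docker pull docker.io/nginx:latest
	out/minikube-linux-arm64 image load docker.io/nginx:latest -p functional-535239

Caveat: side-loading only helps when the pod's imagePullPolicy permits a cached image; for an untagged or :latest image the default policy is Always, so authenticating the pull (docker login on the daemon that performs it) is the more reliable fix here.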
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.86s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-535239 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e95a9af5-43b5-4878-b771-26b3080275ba] Pending
helpers_test.go:352: "nginx-svc" [e95a9af5-43b5-4878-b771-26b3080275ba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535239 -n functional-535239
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-10-02 20:57:52.473741807 +0000 UTC m=+1809.641919417
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-535239 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-535239 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-535239/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:53:52 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8c7n (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r8c7n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/nginx-svc to functional-535239
  Warning  Failed     2m31s (x3 over 3m46s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    63s (x5 over 4m)       kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     62s (x2 over 4m)       kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     62s (x5 over 4m)       kubelet            Error: ErrImagePull
  Normal   BackOff    11s (x15 over 4m)      kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     11s (x15 over 4m)      kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-535239 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-535239 logs nginx-svc -n default: exit status 1 (99.945401ms)
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-535239 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.86s)
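
Note: the 4m0s wait expired for the same upstream reason as PersistentVolumeClaim above: nginx-svc stayed Pending because docker.io/nginx:alpine could not be pulled. To confirm that a wait like this died on image pulls rather than scheduling, one plain-kubectl probe (illustrative, not part of the harness) is:

	kubectl --context functional-535239 get pod nginx-svc \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	# prints ImagePullBackOff for the state captured above

The Scheduled event at 4m shows placement succeeded immediately; only the pull failed.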
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.8s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1002 20:57:52.661231  703895 retry.go:31] will retry after 2.015066194s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:57:54.676966  703895 retry.go:31] will retry after 6.188796719s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:58:00.866597  703895 retry.go:31] will retry after 8.613064976s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-535239 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.107.60.50   10.107.60.50   80:31562/TCP   5m28s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.80s)
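
Note: this is a cascade from WaitService/Setup. The retries above show Go's HTTP client rejecting a URL with no host (Get "http:"), i.e. the address the test derived for the service came up empty because the pod never became Ready. The tunnel side worked: the svc output shows EXTERNAL-IP 10.107.60.50 assigned to nginx-svc. Had the image pull succeeded, a direct probe of that address would be the expected smoke test, e.g.:

	curl -sf http://10.107.60.50    # LoadBalancer IP from the kubectl get svc output above

All three Functional failures in this run therefore share one root cause: the unauthenticated Docker Hub pull rate limit.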
Test pass (312/346)

Order  Passed test  Duration
3 TestDownloadOnly/v1.28.0/json-events 12.61
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 12.39
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
22 TestOffline 85.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 165.24
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.97
35 TestAddons/parallel/Registry 15.9
36 TestAddons/parallel/RegistryCreds 0.72
38 TestAddons/parallel/InspektorGadget 6.38
39 TestAddons/parallel/MetricsServer 5.79
42 TestAddons/parallel/Headlamp 17.68
43 TestAddons/parallel/CloudSpanner 6.58
45 TestAddons/parallel/NvidiaDevicePlugin 5.7
46 TestAddons/parallel/Yakd 11.73
48 TestAddons/StoppedEnableDisable 11.23
49 TestCertOptions 41.83
50 TestCertExpiration 271.03
51 TestDockerFlags 47.23
52 TestForceSystemdFlag 48.85
53 TestForceSystemdEnv 49.29
59 TestErrorSpam/setup 34.51
60 TestErrorSpam/start 0.85
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 1.48
63 TestErrorSpam/unpause 1.65
64 TestErrorSpam/stop 11.11
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 85.79
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 50.84
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
76 TestFunctional/serial/CacheCmd/cache/add_local 0.99
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 54.02
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.23
87 TestFunctional/serial/LogsFileCmd 1.25
88 TestFunctional/serial/InvalidService 4.23
90 TestFunctional/parallel/ConfigCmd 0.51
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.07
98 TestFunctional/parallel/ServiceCmdConnect 7.6
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.59
103 TestFunctional/parallel/CpCmd 2.01
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.12
110 TestFunctional/parallel/NodeLabels 0.15
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
114 TestFunctional/parallel/License 0.4
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 1
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.49
122 TestFunctional/parallel/ImageCommands/Setup 0.66
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
124 TestFunctional/parallel/DockerEnv/bash 1.38
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.33
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/ServiceCmd/List 0.35
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
143 TestFunctional/parallel/ServiceCmd/Format 0.38
144 TestFunctional/parallel/ServiceCmd/URL 0.38
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
147 TestFunctional/parallel/ProfileCmd/profile_list 0.43
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
149 TestFunctional/parallel/MountCmd/any-port 7.74
150 TestFunctional/parallel/MountCmd/specific-port 2.16
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 157.24
164 TestMultiControlPlane/serial/DeployApp 7.87
165 TestMultiControlPlane/serial/PingHostFromPods 1.79
166 TestMultiControlPlane/serial/AddWorkerNode 36.28
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
169 TestMultiControlPlane/serial/CopyFile 20.41
170 TestMultiControlPlane/serial/StopSecondaryNode 11.92
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 46.9
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 189.87
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.42
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
177 TestMultiControlPlane/serial/StopCluster 32.77
178 TestMultiControlPlane/serial/RestartCluster 103.58
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
180 TestMultiControlPlane/serial/AddSecondaryNode 61.87
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.15
184 TestImageBuild/serial/Setup 31.59
185 TestImageBuild/serial/NormalBuild 1.84
186 TestImageBuild/serial/BuildWithBuildArg 1.06
187 TestImageBuild/serial/BuildWithDockerIgnore 0.9
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.03
192 TestJSONOutput/start/Command 80.94
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.63
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.56
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 11.05
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.25
217 TestKicCustomNetwork/create_custom_network 36.96
218 TestKicCustomNetwork/use_default_bridge_network 35.61
219 TestKicExistingNetwork 34.18
220 TestKicCustomSubnet 35.38
221 TestKicStaticIP 37.26
222 TestMainNoArgs 0.05
223 TestMinikubeProfile 81.56
226 TestMountStart/serial/StartWithMountFirst 9.3
227 TestMountStart/serial/VerifyMountFirst 0.27
228 TestMountStart/serial/StartWithMountSecond 8.6
229 TestMountStart/serial/VerifyMountSecond 0.26
230 TestMountStart/serial/DeleteFirst 1.49
231 TestMountStart/serial/VerifyMountPostDelete 0.26
232 TestMountStart/serial/Stop 1.22
233 TestMountStart/serial/RestartStopped 8.6
234 TestMountStart/serial/VerifyMountPostStop 0.27
237 TestMultiNode/serial/FreshStart2Nodes 92.91
238 TestMultiNode/serial/DeployApp2Nodes 5.58
239 TestMultiNode/serial/PingHostFrom2Pods 1.01
240 TestMultiNode/serial/AddNode 35.24
241 TestMultiNode/serial/MultiNodeLabels 0.09
242 TestMultiNode/serial/ProfileList 0.71
243 TestMultiNode/serial/CopyFile 10.31
244 TestMultiNode/serial/StopNode 2.43
245 TestMultiNode/serial/StartAfterStop 9.28
246 TestMultiNode/serial/RestartKeepsNodes 78.86
247 TestMultiNode/serial/DeleteNode 5.67
248 TestMultiNode/serial/StopMultiNode 21.84
249 TestMultiNode/serial/RestartMultiNode 53.82
250 TestMultiNode/serial/ValidateNameConflict 40.29
255 TestPreload 126.61
257 TestScheduledStopUnix 107.79
258 TestSkaffold 149.04
260 TestInsufficientStorage 13.65
261 TestRunningBinaryUpgrade 88.53
263 TestKubernetesUpgrade 388.64
264 TestMissingContainerUpgrade 106.36
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
267 TestNoKubernetes/serial/StartWithK8s 43.08
268 TestNoKubernetes/serial/StartWithStopK8s 19.81
269 TestNoKubernetes/serial/Start 10.23
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
271 TestNoKubernetes/serial/ProfileList 1.1
272 TestNoKubernetes/serial/Stop 1.23
273 TestNoKubernetes/serial/StartNoArgs 8.29
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
286 TestStoppedBinaryUpgrade/Setup 7.61
287 TestStoppedBinaryUpgrade/Upgrade 74.44
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
290 TestPause/serial/Start 76.73
291 TestPause/serial/SecondStartNoReconfiguration 50.12
292 TestPause/serial/Pause 0.66
293 TestPause/serial/VerifyStatus 0.31
294 TestPause/serial/Unpause 0.59
295 TestPause/serial/PauseAgain 1.1
296 TestPause/serial/DeletePaused 2.19
297 TestPause/serial/VerifyDeletedResources 0.4
305 TestNetworkPlugins/group/auto/Start 75.07
306 TestNetworkPlugins/group/auto/KubeletFlags 0.31
307 TestNetworkPlugins/group/auto/NetCatPod 11.31
308 TestNetworkPlugins/group/auto/DNS 0.38
309 TestNetworkPlugins/group/auto/Localhost 0.16
310 TestNetworkPlugins/group/auto/HairPin 0.17
311 TestNetworkPlugins/group/kindnet/Start 64.18
312 TestNetworkPlugins/group/calico/Start 69.86
313 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
315 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
316 TestNetworkPlugins/group/kindnet/DNS 0.29
317 TestNetworkPlugins/group/kindnet/Localhost 0.26
318 TestNetworkPlugins/group/kindnet/HairPin 0.23
319 TestNetworkPlugins/group/custom-flannel/Start 57.88
320 TestNetworkPlugins/group/calico/ControllerPod 6.01
321 TestNetworkPlugins/group/calico/KubeletFlags 0.4
322 TestNetworkPlugins/group/calico/NetCatPod 11.37
323 TestNetworkPlugins/group/calico/DNS 0.24
324 TestNetworkPlugins/group/calico/Localhost 0.22
325 TestNetworkPlugins/group/calico/HairPin 0.22
326 TestNetworkPlugins/group/false/Start 80.46
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.37
329 TestNetworkPlugins/group/custom-flannel/DNS 0.25
330 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
331 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
332 TestNetworkPlugins/group/enable-default-cni/Start 75.27
333 TestNetworkPlugins/group/false/KubeletFlags 0.45
334 TestNetworkPlugins/group/false/NetCatPod 10.43
335 TestNetworkPlugins/group/false/DNS 0.19
336 TestNetworkPlugins/group/false/Localhost 0.16
337 TestNetworkPlugins/group/false/HairPin 0.21
338 TestNetworkPlugins/group/flannel/Start 58.12
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
344 TestNetworkPlugins/group/bridge/Start 74.39
345 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
347 TestNetworkPlugins/group/flannel/NetCatPod 12.32
348 TestNetworkPlugins/group/flannel/DNS 0.2
349 TestNetworkPlugins/group/flannel/Localhost 0.23
350 TestNetworkPlugins/group/flannel/HairPin 0.19
351 TestNetworkPlugins/group/kubenet/Start 81.18
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
353 TestNetworkPlugins/group/bridge/NetCatPod 10.38
354 TestNetworkPlugins/group/bridge/DNS 0.27
355 TestNetworkPlugins/group/bridge/Localhost 0.2
356 TestNetworkPlugins/group/bridge/HairPin 0.17
358 TestStartStop/group/old-k8s-version/serial/FirstStart 91.93
359 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
360 TestNetworkPlugins/group/kubenet/NetCatPod 11.41
361 TestNetworkPlugins/group/kubenet/DNS 0.24
362 TestNetworkPlugins/group/kubenet/Localhost 0.17
363 TestNetworkPlugins/group/kubenet/HairPin 0.17
365 TestStartStop/group/no-preload/serial/FirstStart 86.79
366 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
367 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.38
368 TestStartStop/group/old-k8s-version/serial/Stop 11.33
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/old-k8s-version/serial/SecondStart 29.34
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11.02
372 TestStartStop/group/no-preload/serial/DeployApp 10.34
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.16
374 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
375 TestStartStop/group/no-preload/serial/Stop 12.06
376 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
377 TestStartStop/group/old-k8s-version/serial/Pause 3.19
379 TestStartStop/group/embed-certs/serial/FirstStart 82.51
380 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
381 TestStartStop/group/no-preload/serial/SecondStart 60.58
382 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
385 TestStartStop/group/no-preload/serial/Pause 3.11
386 TestStartStop/group/embed-certs/serial/DeployApp 9.37
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.08
389 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
390 TestStartStop/group/embed-certs/serial/Stop 11.12
391 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
392 TestStartStop/group/embed-certs/serial/SecondStart 56.78
393 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
396 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
397 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
398 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
399 TestStartStop/group/embed-certs/serial/Pause 3.16
401 TestStartStop/group/newest-cni/serial/FirstStart 49.25
402 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
403 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 60.95
404 TestStartStop/group/newest-cni/serial/DeployApp 0
405 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
406 TestStartStop/group/newest-cni/serial/Stop 11.1
407 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
408 TestStartStop/group/newest-cni/serial/SecondStart 22.39
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.15
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.89
413 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
414 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
416 TestStartStop/group/newest-cni/serial/Pause 4.77
TestDownloadOnly/v1.28.0/json-events (12.61s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-625181 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-625181 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.606243144s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.61s)
TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 20:27:55.485262  703895 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1002 20:27:55.485345  703895 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-625181
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-625181: exit status 85 (89.865505ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-625181 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-625181 │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:27:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:27:42.929026  703903 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:42.929234  703903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:42.929264  703903 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:42.929287  703903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:42.929616  703903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	W1002 20:27:42.929794  703903 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21682-702037/.minikube/config/config.json: open /home/jenkins/minikube-integration/21682-702037/.minikube/config/config.json: no such file or directory
	I1002 20:27:42.930272  703903 out.go:368] Setting JSON to true
	I1002 20:27:42.931208  703903 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11390,"bootTime":1759425473,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:27:42.931318  703903 start.go:140] virtualization:  
	I1002 20:27:42.935446  703903 out.go:99] [download-only-625181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 20:27:42.935650  703903 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 20:27:42.935791  703903 notify.go:220] Checking for updates...
	I1002 20:27:42.939479  703903 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:27:42.942976  703903 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:27:42.945932  703903 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:27:42.948841  703903 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:27:42.951850  703903 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 20:27:42.957316  703903 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:27:42.957608  703903 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:27:42.994825  703903 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:27:42.994962  703903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:27:43.055670  703903 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 20:27:43.04559652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:27:43.055785  703903 docker.go:318] overlay module found
	I1002 20:27:43.058842  703903 out.go:99] Using the docker driver based on user configuration
	I1002 20:27:43.058880  703903 start.go:304] selected driver: docker
	I1002 20:27:43.058910  703903 start.go:924] validating driver "docker" against <nil>
	I1002 20:27:43.059028  703903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:27:43.121739  703903 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 20:27:43.112603028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:27:43.121905  703903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:27:43.122202  703903 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:27:43.122360  703903 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:27:43.125583  703903 out.go:171] Using Docker driver with root privileges
	I1002 20:27:43.128526  703903 cni.go:84] Creating CNI manager for ""
	I1002 20:27:43.128611  703903 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:27:43.128626  703903 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:27:43.128710  703903 start.go:348] cluster config:
	{Name:download-only-625181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-625181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:27:43.131733  703903 out.go:99] Starting "download-only-625181" primary control-plane node in "download-only-625181" cluster
	I1002 20:27:43.131779  703903 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:27:43.134687  703903 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:27:43.134731  703903 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 20:27:43.134889  703903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:27:43.150959  703903 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:27:43.151158  703903 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:27:43.151271  703903 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:27:43.188148  703903 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1002 20:27:43.188180  703903 cache.go:58] Caching tarball of preloaded images
	I1002 20:27:43.188341  703903 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 20:27:43.191592  703903 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 20:27:43.191621  703903 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1002 20:27:43.277052  703903 preload.go:290] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1002 20:27:43.277178  703903 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1002 20:27:48.472061  703903 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-625181 host does not exist
	  To start a cluster, run: "minikube start -p download-only-625181"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-625181
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
TestDownloadOnly/v1.34.1/json-events (12.39s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-545661 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-545661 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.389881928s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.39s)
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 20:28:08.324765  703895 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1002 20:28:08.324804  703895 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-545661
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-545661: exit status 85 (90.162419ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-625181 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-625181 │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ delete  │ -p download-only-625181                                                                                                                                                       │ download-only-625181 │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ start   │ -o=json --download-only -p download-only-545661 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-545661 │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:27:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:27:55.978961  704102 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:55.979395  704102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:55.979432  704102 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:55.979454  704102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:55.979720  704102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:27:55.980166  704102 out.go:368] Setting JSON to true
	I1002 20:27:55.981050  704102 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11403,"bootTime":1759425473,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:27:55.981151  704102 start.go:140] virtualization:  
	I1002 20:27:55.984557  704102 out.go:99] [download-only-545661] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:27:55.984837  704102 notify.go:220] Checking for updates...
	I1002 20:27:55.988597  704102 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:27:55.991568  704102 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:27:55.994484  704102 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:27:55.997663  704102 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:27:56.000453  704102 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 20:27:56.007386  704102 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:27:56.007708  704102 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:27:56.039351  704102 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:27:56.039481  704102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:27:56.096297  704102 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:27:56.086831712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:27:56.096403  704102 docker.go:318] overlay module found
	I1002 20:27:56.099439  704102 out.go:99] Using the docker driver based on user configuration
	I1002 20:27:56.099482  704102 start.go:304] selected driver: docker
	I1002 20:27:56.099494  704102 start.go:924] validating driver "docker" against <nil>
	I1002 20:27:56.099605  704102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:27:56.151797  704102 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 20:27:56.143149363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:27:56.151965  704102 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:27:56.152246  704102 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:27:56.152407  704102 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:27:56.155496  704102 out.go:171] Using Docker driver with root privileges
	I1002 20:27:56.158352  704102 cni.go:84] Creating CNI manager for ""
	I1002 20:27:56.158437  704102 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:27:56.158451  704102 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:27:56.158533  704102 start.go:348] cluster config:
	{Name:download-only-545661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-545661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:27:56.161407  704102 out.go:99] Starting "download-only-545661" primary control-plane node in "download-only-545661" cluster
	I1002 20:27:56.161523  704102 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 20:27:56.164400  704102 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:27:56.164437  704102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:27:56.164620  704102 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:27:56.181277  704102 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:27:56.181414  704102 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:27:56.181452  704102 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:27:56.181461  704102 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:27:56.181470  704102 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:27:56.230425  704102 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 20:27:56.230451  704102 cache.go:58] Caching tarball of preloaded images
	I1002 20:27:56.231277  704102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 20:27:56.234460  704102 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 20:27:56.234489  704102 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1002 20:27:56.315398  704102 preload.go:290] Got checksum from GCS API "0ed426d75a878e5f4b25fef8ce404e82"
	I1002 20:27:56.315453  704102 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4?checksum=md5:0ed426d75a878e5f4b25fef8ce404e82 -> /home/jenkins/minikube-integration/21682-702037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-545661 host does not exist
	  To start a cluster, run: "minikube start -p download-only-545661"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-545661
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1002 20:28:09.495548  703895 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-067581 --alsologtostderr --binary-mirror http://127.0.0.1:39571 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-067581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-067581
--- PASS: TestBinaryMirror (0.61s)
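
TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:39571) instead of dl.k8s.io. A mirror of that shape is just a static file server; the sketch below is a hedged illustration (the served directory is a made-up path), not the test's own server.

// A minimal local binary mirror in the spirit of the --binary-mirror flag
// exercised above; /var/cache/k8s-release is a hypothetical directory that
// would mirror the dl.k8s.io layout (e.g. release/v1.34.1/bin/linux/arm64/kubectl).
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("/var/cache/k8s-release"))
	log.Fatal(http.ListenAndServe("127.0.0.1:39571", fs))
}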

TestOffline (85.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-119324 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-119324 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m22.915684006s)
helpers_test.go:175: Cleaning up "offline-docker-119324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-119324
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-119324: (2.379598342s)
--- PASS: TestOffline (85.30s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-991638
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-991638: exit status 85 (77.293432ms)

-- stdout --
	* Profile "addons-991638" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-991638"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-991638
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-991638: exit status 85 (71.479033ms)

-- stdout --
	* Profile "addons-991638" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-991638"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
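
Both PreSetup tests treat exit status 85 as the expected result of touching an addon on a profile that does not exist yet. Asserting a specific exit code from a child process in Go looks like the sketch below; this is a generic illustration of the check, not the harness code in addons_test.go.

// A generic sketch of asserting a child process's exit code, the check the
// PreSetup tests above rely on; illustrative, not the actual test harness.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs the command and reports its exit status (0 on success).
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err // the command never ran at all (e.g. binary missing)
}

func main() {
	code, err := exitCode("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-991638")
	fmt.Println(code, err) // the tests above expect 85 here before Setup runs
}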

TestAddons/Setup (165.24s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-991638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-991638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m45.234282143s)
--- PASS: TestAddons/Setup (165.24s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-991638 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-991638 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.97s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-991638 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-991638 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a7b74a7e-b386-4a2b-aa59-b9ead96cb40d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a7b74a7e-b386-4a2b-aa59-b9ead96cb40d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003622448s
addons_test.go:694: (dbg) Run:  kubectl --context addons-991638 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-991638 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-991638 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-991638 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.97s)

TestAddons/parallel/Registry (15.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.502372ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-6774f" [7e80f21f-b15e-4cdb-8ea6-acf4d9abae41] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003498776s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-97fzv" [a20a6590-a956-4737-ac00-ac04902b0f75] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003566014s
addons_test.go:392: (dbg) Run:  kubectl --context addons-991638 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-991638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-991638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.941658021s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 ip
2025/10/02 20:35:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.90s)
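
The repeated "waiting ... for pods matching <selector>" lines come from a poll loop in helpers_test.go. Written directly against client-go, such a loop might look like the sketch below, assuming a clientset is already built; it illustrates the pattern only (the real helper also tracks readiness conditions, as the Pending/Ready transitions in the log show).

// A sketch of the "wait for pods matching a label selector" pattern used
// throughout these tests, written against client-go; illustrative only.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPod polls until some pod matching selector in ns reports
// phase Running, or the timeout (e.g. the 6m0s waits above) expires.
func waitForRunningPod(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // tolerate transient API errors, keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}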

TestAddons/parallel/RegistryCreds (0.72s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.570663ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-991638
addons_test.go:332: (dbg) Run:  kubectl --context addons-991638 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/InspektorGadget (6.38s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gq5qh" [521be037-747d-49de-80f9-7a4e478d142c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003280846s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.38s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.644728ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4vr85" [f34ac532-4ae3-4ba7-a7fb-9f87c37f5519] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003574642s
addons_test.go:463: (dbg) Run:  kubectl --context addons-991638 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/Headlamp (17.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-991638 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-bcffz" [7b5aee0b-5b06-4cbc-a3da-ef2302b6b34f] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-bcffz" [7b5aee0b-5b06-4cbc-a3da-ef2302b6b34f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-bcffz" [7b5aee0b-5b06-4cbc-a3da-ef2302b6b34f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00378813s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable headlamp --alsologtostderr -v=1: (5.764947691s)
--- PASS: TestAddons/parallel/Headlamp (17.68s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-jcxrq" [e2887bde-9ce5-4b33-b701-6fccd84c4cc7] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004495415s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/NvidiaDevicePlugin (5.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-xtwll" [49e6d9ab-4a71-41bc-b81f-3fc6b78de696] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003898889s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

TestAddons/parallel/Yakd (11.73s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lmbwf" [664aa9f9-c247-4cbe-abff-b230bcd028b3] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004006465s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-991638 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-991638 addons disable yakd --alsologtostderr -v=1: (5.729859694s)
--- PASS: TestAddons/parallel/Yakd (11.73s)

TestAddons/StoppedEnableDisable (11.23s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-991638
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-991638: (10.940072864s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-991638
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-991638
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-991638
--- PASS: TestAddons/StoppedEnableDisable (11.23s)

TestCertOptions (41.83s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-599655 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-599655 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (38.926654411s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-599655 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-599655 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-599655 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-599655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-599655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-599655: (2.142339286s)
--- PASS: TestCertOptions (41.83s)

TestCertExpiration (271.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-990948 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-990948 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.364899793s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-990948 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-990948 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (48.145096137s)
helpers_test.go:175: Cleaning up "cert-expiration-990948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-990948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-990948: (2.523780464s)
--- PASS: TestCertExpiration (271.03s)

TestDockerFlags (47.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-359831 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-359831 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.212434141s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-359831 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-359831 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-359831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-359831
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-359831: (2.2971794s)
--- PASS: TestDockerFlags (47.23s)
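
TestDockerFlags verifies the --docker-env values by reading systemd's Environment property, which `systemctl show docker --property=Environment` prints as a single "Environment=FOO=BAR BAZ=BAT" line. A hedged sketch of that check follows; it uses naive whitespace splitting and ignores quoting, so it is an illustration of the assertion, not the test's parser.

// A sketch of checking `systemctl show ... --property=Environment` output
// for an expected KEY=VALUE pair, as the assertions above do.
package sketch

import "strings"

// hasEnv reports whether the Environment property line contains pair,
// e.g. hasEnv("Environment=FOO=BAR BAZ=BAT", "FOO=BAR") == true.
func hasEnv(showOutput, pair string) bool {
	line := strings.TrimPrefix(strings.TrimSpace(showOutput), "Environment=")
	for _, kv := range strings.Fields(line) {
		if kv == pair {
			return true
		}
	}
	return false
}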

TestForceSystemdFlag (48.85s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-755881 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 21:36:52.295680  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-755881 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.35079536s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-755881 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-755881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-755881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-755881: (2.892892722s)
--- PASS: TestForceSystemdFlag (48.85s)

TestForceSystemdEnv (49.29s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-925776 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-925776 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.160385011s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-925776 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-925776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-925776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-925776: (2.679870646s)
--- PASS: TestForceSystemdEnv (49.29s)

TestErrorSpam/setup (34.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-067290 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-067290 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-067290 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-067290 --driver=docker  --container-runtime=docker: (34.514644313s)
--- PASS: TestErrorSpam/setup (34.51s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (11.11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 stop: (10.898323394s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-067290 --log_dir /tmp/nospam-067290 stop
--- PASS: TestErrorSpam/stop (11.11s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21682-702037/.minikube/files/etc/test/nested/copy/703895/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535239 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1002 20:50:55.465858  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:55.472352  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:55.483839  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:55.505197  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:55.546569  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:55.627959  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:55.789373  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:56.111132  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:56.753152  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:58.034518  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:51:00.595932  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:51:05.717346  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:51:15.959566  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:51:36.441200  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-535239 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m25.788125997s)
--- PASS: TestFunctional/serial/StartWithProxy (85.79s)
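
The burst of cert_rotation errors above is expected while functional-535239's client certificate does not exist yet; note that the retry timestamps roughly double, from milliseconds apart up to ~20s, which is the signature of exponential backoff. A generic sketch of that retry shape follows; it illustrates the doubling intervals, not client-go's actual rotation code.

// A generic exponential-backoff sketch matching the doubling retry
// intervals visible in the cert_rotation log lines above.
package sketch

import "time"

// retryWithBackoff calls fn until it succeeds or attempts run out,
// doubling the sleep after each failure, capped at max.
func retryWithBackoff(fn func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return err
}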

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.84s)

=== RUN   TestFunctional/serial/SoftStart
I1002 20:51:44.923453  703895 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535239 --alsologtostderr -v=8
E1002 20:52:17.403506  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-535239 --alsologtostderr -v=8: (50.8395015s)
functional_test.go:678: soft start took 50.842245711s for "functional-535239" cluster.
I1002 20:52:35.763367  703895 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (50.84s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-535239 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-535239 /tmp/TestFunctionalserialCacheCmdcacheadd_local4030384281/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cache add minikube-local-cache-test:functional-535239
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cache delete minikube-local-cache-test:functional-535239
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-535239
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.57369ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
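The reload cycle above is reproducible by hand; a rough sketch using the same commands the test runs:

    out/minikube-linux-arm64 -p functional-535239 ssh sudo docker rmi registry.k8s.io/pause:latest       # drop the image inside the node
    out/minikube-linux-arm64 -p functional-535239 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image is gone
    out/minikube-linux-arm64 -p functional-535239 cache reload                                           # re-push everything in the host-side cache
    out/minikube-linux-arm64 -p functional-535239 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again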

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 kubectl -- --context functional-535239 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-535239 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (54.02s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-535239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.020437851s)
functional_test.go:776: restart took 54.020549187s for "functional-535239" cluster.
I1002 20:53:36.088522  703895 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (54.02s)
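--extra-config=apiserver.<flag>=<value> is forwarded into the kube-apiserver static-pod manifest when the cluster restarts. One way to confirm the plugin landed, assuming the usual static-pod name kube-apiserver-<node-name> (not shown in this log):

    kubectl --context functional-535239 -n kube-system get pod kube-apiserver-functional-535239 -o yaml | grep enable-admission-plugins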

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-535239 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.23s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-535239 logs: (1.225807017s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 logs --file /tmp/TestFunctionalserialLogsFileCmd3510831247/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-535239 logs --file /tmp/TestFunctionalserialLogsFileCmd3510831247/001/logs.txt: (1.251556965s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.23s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-535239 apply -f testdata/invalidsvc.yaml
E1002 20:53:39.324813  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-535239
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-535239: exit status 115 (512.712987ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30740 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-535239 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)
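testdata/invalidsvc.yaml itself is not shown in this log; any Service whose selector matches no pods reproduces the exit-115 SVC_UNREACHABLE path above. A hypothetical stand-in (kubectl's generated selector app=invalid-svc matches nothing in this cluster):

    kubectl --context functional-535239 create service nodeport invalid-svc --tcp=80:80
    out/minikube-linux-arm64 service invalid-svc -p functional-535239    # expected: exit status 115, SVC_UNREACHABLE
    kubectl --context functional-535239 delete service invalid-svc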

TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 config get cpus: exit status 14 (84.651485ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 config get cpus: exit status 14 (85.507889ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
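Exit status 14 is what `config get` returns for a key that is not set, so the unset/get/set/get/unset/get sequence above exercises both paths:

    out/minikube-linux-arm64 -p functional-535239 config get cpus     # exit 14: key not in config
    out/minikube-linux-arm64 -p functional-535239 config set cpus 2
    out/minikube-linux-arm64 -p functional-535239 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-535239 config unset cpus   # the next get is back to exit 14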

TestFunctional/parallel/DryRun (0.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-535239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (200.263188ms)

-- stdout --
	* [functional-535239] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 20:58:29.165718  761805 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:58:29.165846  761805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:29.165858  761805 out.go:374] Setting ErrFile to fd 2...
	I1002 20:58:29.165863  761805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:29.166160  761805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:58:29.166547  761805 out.go:368] Setting JSON to false
	I1002 20:58:29.167721  761805 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13236,"bootTime":1759425473,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:58:29.167797  761805 start.go:140] virtualization:  
	I1002 20:58:29.171049  761805 out.go:179] * [functional-535239] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:58:29.174535  761805 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:58:29.174667  761805 notify.go:220] Checking for updates...
	I1002 20:58:29.180494  761805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:58:29.183531  761805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:58:29.186179  761805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:58:29.188864  761805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:58:29.191588  761805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:58:29.194717  761805 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:58:29.195265  761805 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:58:29.219113  761805 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:58:29.219241  761805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:58:29.278418  761805 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:58:29.268644844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:58:29.278527  761805 docker.go:318] overlay module found
	I1002 20:58:29.285526  761805 out.go:179] * Using the docker driver based on existing profile
	I1002 20:58:29.289252  761805 start.go:304] selected driver: docker
	I1002 20:58:29.289293  761805 start.go:924] validating driver "docker" against &{Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:58:29.289391  761805 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:58:29.292969  761805 out.go:203] 
	W1002 20:58:29.295913  761805 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 20:58:29.298671  761805 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)
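--dry-run runs the full argument and requirement validation without touching the cluster, which is why the undersized request fails fast (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) while the second, unmodified dry run passes:

    out/minikube-linux-arm64 start -p functional-535239 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exit 23: below the 1800MB minimum
    out/minikube-linux-arm64 start -p functional-535239 --dry-run --driver=docker --container-runtime=docker                  # validates the existing profile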

TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-535239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (206.630074ms)

-- stdout --
	* [functional-535239] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 20:58:29.610822  761926 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:58:29.611036  761926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:29.611064  761926 out.go:374] Setting ErrFile to fd 2...
	I1002 20:58:29.611085  761926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:29.612131  761926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 20:58:29.612564  761926 out.go:368] Setting JSON to false
	I1002 20:58:29.613660  761926 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13236,"bootTime":1759425473,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 20:58:29.613801  761926 start.go:140] virtualization:  
	I1002 20:58:29.616956  761926 out.go:179] * [functional-535239] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 20:58:29.620671  761926 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:58:29.620744  761926 notify.go:220] Checking for updates...
	I1002 20:58:29.626447  761926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:58:29.629379  761926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	I1002 20:58:29.632162  761926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	I1002 20:58:29.634899  761926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:58:29.637680  761926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:58:29.641045  761926 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:58:29.641798  761926 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:58:29.666995  761926 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:58:29.667120  761926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:58:29.726501  761926 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:58:29.717160061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:58:29.726614  761926 docker.go:318] overlay module found
	I1002 20:58:29.729870  761926 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 20:58:29.732816  761926 start.go:304] selected driver: docker
	I1002 20:58:29.732833  761926 start.go:924] validating driver "docker" against &{Name:functional-535239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-535239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:58:29.732940  761926 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:58:29.736572  761926 out.go:203] 
	W1002 20:58:29.739401  761926 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 20:58:29.742197  761926 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
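The French output is the point of this test: minikube selects its message catalogue from the locale environment, so the same failing dry run produces localized messages with the same exit code. A sketch, assuming the test drives this through LC_ALL (the exact variable is not visible in the log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-535239 --dry-run --memory 250MB --driver=docker   # same exit 23, messages in French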

TestFunctional/parallel/StatusCmd (1.07s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/ServiceCmdConnect (7.6s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-535239 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-535239 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-4lt6r" [63f21048-f66d-483f-904e-beadaa3a2886] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1002 20:58:09.480366  703895 retry.go:31] will retry after 13.931119718s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:352: "hello-node-connect-7d85dfc575-4lt6r" [63f21048-f66d-483f-904e-beadaa3a2886] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003485816s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32625
functional_test.go:1680: http://192.168.49.2:32625: success! body:
Request served by hello-node-connect-7d85dfc575-4lt6r

HTTP/1.1 GET /

Host: 192.168.49.2:32625
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.60s)
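End to end, this test is: Deployment, then NodePort Service, then URL from minikube, then a plain HTTP GET. Condensed:

    kubectl --context functional-535239 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-535239 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-arm64 -p functional-535239 service hello-node-connect --url)
    curl "$URL"   # echo-server answers with the request it served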

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.59s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (2.01s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh -n functional-535239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cp functional-535239:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd437154902/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh -n functional-535239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh -n functional-535239 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)

TestFunctional/parallel/FileSync (0.36s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/703895/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /etc/test/nested/copy/703895/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.12s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/703895.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /etc/ssl/certs/703895.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/703895.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /usr/share/ca-certificates/703895.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7038952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /etc/ssl/certs/7038952.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7038952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /usr/share/ca-certificates/7038952.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

TestFunctional/parallel/NodeLabels (0.15s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-535239 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh "sudo systemctl is-active crio": exit status 1 (366.111411ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
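`systemctl is-active` exits 0 only for an active unit; an inactive unit prints "inactive" and exits 3, which surfaces above as "Process exited with status 3". On this docker-runtime profile crio must therefore be down:

    out/minikube-linux-arm64 -p functional-535239 ssh "sudo systemctl is-active crio"     # prints "inactive", exit 3
    out/minikube-linux-arm64 -p functional-535239 ssh "sudo systemctl is-active docker"   # should report active on this profile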

TestFunctional/parallel/License (0.4s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535239 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-535239
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-535239
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535239 image ls --format short --alsologtostderr:
I1002 20:59:22.115550  762782 out.go:360] Setting OutFile to fd 1 ...
I1002 20:59:22.115676  762782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:22.115686  762782 out.go:374] Setting ErrFile to fd 2...
I1002 20:59:22.115691  762782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:22.115944  762782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
I1002 20:59:22.116532  762782 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:22.116661  762782 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:22.117104  762782 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:59:22.134038  762782 ssh_runner.go:195] Run: systemctl --version
I1002 20:59:22.134113  762782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:59:22.151135  762782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:59:22.252063  762782 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535239 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ 43911e833d64d │ 83.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ 7eb2c6ff0c5a7 │ 71.5MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ 05baa95f5142d │ 74.7MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ localhost/my-image                          │ functional-535239 │ 7393e1470084c │ 1.41MB │
│ docker.io/library/minikube-local-cache-test │ functional-535239 │ f5d671b54a191 │ 30B    │
│ docker.io/kicbase/echo-server               │ functional-535239 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ b5f57ec6b9867 │ 50.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535239 image ls --format table --alsologtostderr:
I1002 20:59:26.255049  763142 out.go:360] Setting OutFile to fd 1 ...
I1002 20:59:26.255295  763142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:26.255328  763142 out.go:374] Setting ErrFile to fd 2...
I1002 20:59:26.255350  763142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:26.255621  763142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
I1002 20:59:26.256251  763142 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:26.256496  763142 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:26.257050  763142 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:59:26.274546  763142 ssh_runner.go:195] Run: systemctl --version
I1002 20:59:26.274605  763142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:59:26.296532  763142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:59:26.396138  763142 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E1002 21:00:55.463647  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
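The four ImageList variants differ only in the --format flag (short/table/json/yaml); json is the easiest to post-process. For example, assuming jq is installed on the host:

    out/minikube-linux-arm64 -p functional-535239 image ls --format json | jq -r '.[].repoTags[]'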

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535239 image ls --format json --alsologtostderr:
[{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-535239","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"74700000"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"83700000"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"71500000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7393e1470084c548aa540a52fed34d065d3783cf288577ba66d460437389d03f","repoDigests":[],"repoTags":["localhost/my-image:functional-535239"],"size":"1410000"},{"id":"f5d671b54a191cab3ba3566188093168c36b581bf66053d73e7917cd4de7fe8a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-535239"],"size":"30"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"50500000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535239 image ls --format json --alsologtostderr:
I1002 20:59:26.042286  763104 out.go:360] Setting OutFile to fd 1 ...
I1002 20:59:26.042399  763104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:26.042412  763104 out.go:374] Setting ErrFile to fd 2...
I1002 20:59:26.042417  763104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:26.044127  763104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
I1002 20:59:26.045171  763104 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:26.045317  763104 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:26.045995  763104 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:59:26.064450  763104 ssh_runner.go:195] Run: systemctl --version
I1002 20:59:26.064506  763104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:59:26.082067  763104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:59:26.180136  763104 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
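For anyone scripting against this command, the stdout above is a flat JSON array that can be unmarshalled directly. A minimal sketch in Go — the imageInfo type name is ours, the field names are taken from the listing above, and the binary path is this run's build artifact (a regular install would just call minikube):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo mirrors one element of the array printed by
// `image ls --format json`; the struct name is illustrative.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-535239",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags, img.Size)
	}
}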

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535239 image ls --format yaml --alsologtostderr:
- id: f5d671b54a191cab3ba3566188093168c36b581bf66053d73e7917cd4de7fe8a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-535239
size: "30"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "83700000"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "71500000"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "74700000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-535239
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "50500000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535239 image ls --format yaml --alsologtostderr:
I1002 20:59:22.333749  762819 out.go:360] Setting OutFile to fd 1 ...
I1002 20:59:22.333946  762819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:22.333959  762819 out.go:374] Setting ErrFile to fd 2...
I1002 20:59:22.333964  762819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:22.334243  762819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
I1002 20:59:22.334896  762819 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:22.335057  762819 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:22.335552  762819 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:59:22.352518  762819 ssh_runner.go:195] Run: systemctl --version
I1002 20:59:22.352621  762819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:59:22.371319  762819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:59:22.468503  762819 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh pgrep buildkitd: exit status 1 (275.920521ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image build -t localhost/my-image:functional-535239 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-535239 image build -t localhost/my-image:functional-535239 testdata/build --alsologtostderr: (2.990391884s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535239 image build -t localhost/my-image:functional-535239 testdata/build --alsologtostderr:
I1002 20:59:22.823238  762929 out.go:360] Setting OutFile to fd 1 ...
I1002 20:59:22.823949  762929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:22.823992  762929 out.go:374] Setting ErrFile to fd 2...
I1002 20:59:22.824013  762929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:22.824300  762929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
I1002 20:59:22.825003  762929 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:22.826809  762929 config.go:182] Loaded profile config "functional-535239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:59:22.827461  762929 cli_runner.go:164] Run: docker container inspect functional-535239 --format={{.State.Status}}
I1002 20:59:22.846147  762929 ssh_runner.go:195] Run: systemctl --version
I1002 20:59:22.846222  762929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535239
I1002 20:59:22.865001  762929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/functional-535239/id_rsa Username:docker}
I1002 20:59:22.964188  762929 build_images.go:161] Building image from path: /tmp/build.217314554.tar
I1002 20:59:22.964262  762929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 20:59:22.972237  762929 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.217314554.tar
I1002 20:59:22.975825  762929 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.217314554.tar: stat -c "%s %y" /var/lib/minikube/build/build.217314554.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.217314554.tar': No such file or directory
I1002 20:59:22.975853  762929 ssh_runner.go:362] scp /tmp/build.217314554.tar --> /var/lib/minikube/build/build.217314554.tar (3072 bytes)
I1002 20:59:22.993615  762929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.217314554
I1002 20:59:23.002914  762929 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.217314554 -xf /var/lib/minikube/build/build.217314554.tar
I1002 20:59:23.012853  762929 docker.go:361] Building image: /var/lib/minikube/build/build.217314554
I1002 20:59:23.013003  762929 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-535239 /var/lib/minikube/build/build.217314554
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:7393e1470084c548aa540a52fed34d065d3783cf288577ba66d460437389d03f done
#8 naming to localhost/my-image:functional-535239 done
#8 DONE 0.1s
I1002 20:59:25.738073  762929 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-535239 /var/lib/minikube/build/build.217314554: (2.725032424s)
I1002 20:59:25.738164  762929 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.217314554
I1002 20:59:25.745990  762929 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.217314554.tar
I1002 20:59:25.756773  762929 build_images.go:217] Built localhost/my-image:functional-535239 from /tmp/build.217314554.tar
I1002 20:59:25.756801  762929 build_images.go:133] succeeded building to: functional-535239
I1002 20:59:25.756863  762929 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)
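The build trace above fully recovers the test's Dockerfile: a three-step build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). The staging sequence build_images.go logs — pack the context into a tar, stage it, untar, run docker build, clean up — can be approximated on the host with plain docker. A rough sketch, assuming docker is on PATH and testdata/build exists relative to the working directory; the tar path and image tag are illustrative:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	// Mirror the staged flow from the log: tar the context, unpack it
	// into a scratch directory, build, then remove the staging files.
	run("tar", "-cf", "/tmp/build.tar", "-C", "testdata/build", ".")
	run("mkdir", "-p", "/tmp/build-ctx")
	run("tar", "-C", "/tmp/build-ctx", "-xf", "/tmp/build.tar")
	run("docker", "build", "-t", "localhost/my-image:demo", "/tmp/build-ctx")
	run("rm", "-rf", "/tmp/build-ctx")
	run("rm", "-f", "/tmp/build.tar")
}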

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-535239
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image load --daemon kicbase/echo-server:functional-535239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

TestFunctional/parallel/DockerEnv/bash (1.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-535239 docker-env) && out/minikube-linux-arm64 status -p functional-535239"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-535239 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image load --daemon kicbase/echo-server:functional-535239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-535239
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image load --daemon kicbase/echo-server:functional-535239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image save kicbase/echo-server:functional-535239 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image rm kicbase/echo-server:functional-535239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-535239 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-535239 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-nzng8" [d1340683-3411-4067-859d-fab9696f6866] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-nzng8" [d1340683-3411-4067-859d-fab9696f6866] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003354516s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.33s)
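The readiness gate here — pods labelled app=hello-node must reach Running within 10m — is enforced by the test helper polling the API. The same gate can be expressed with kubectl wait; a sketch using the label, context, and budget from the log above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Re-check in short kubectl-enforced windows until the overall
	// deadline passes, roughly mirroring the helper's poll loop.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--context", "functional-535239",
			"wait", "--for=condition=Ready", "pod",
			"-l", "app=hello-node", "--timeout=30s").Run()
		if err == nil {
			fmt.Println("app=hello-node healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for app=hello-node")
	os.Exit(1)
}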

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-535239
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 image save --daemon kicbase/echo-server:functional-535239 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-535239
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-535239 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-535239 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-535239 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 756910: os: process already finished
helpers_test.go:525: unable to kill pid 756779: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-535239 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-535239 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 service list -o json
functional_test.go:1504: Took "354.725306ms" to run "out/minikube-linux-arm64 -p functional-535239 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30645
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30645
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
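The endpoint found above is just the node's InternalIP joined with the service's NodePort. It can be reassembled with two kubectl jsonpath queries; a sketch with the context, node, and service names from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a query against the test cluster and returns its
// trimmed stdout, panicking on error to keep the sketch short.
func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-535239"}, args...)
	out, err := exec.Command("kubectl", full...).Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ip := kubectl("get", "node", "functional-535239", "-o",
		`jsonpath={.status.addresses[?(@.type=="InternalIP")].address}`)
	port := kubectl("get", "svc", "hello-node", "-o",
		"jsonpath={.spec.ports[0].nodePort}")
	fmt.Printf("http://%s:%s\n", ip, port) // e.g. http://192.168.49.2:30645
}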

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "373.796958ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.730335ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "363.171462ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "68.690293ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (7.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdany-port883963750/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759438697326312470" to /tmp/TestFunctionalparallelMountCmdany-port883963750/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759438697326312470" to /tmp/TestFunctionalparallelMountCmdany-port883963750/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759438697326312470" to /tmp/TestFunctionalparallelMountCmdany-port883963750/001/test-1759438697326312470
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.276265ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 20:58:17.681856  703895 retry.go:31] will retry after 307.943907ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 20:58 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 20:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 20:58 test-1759438697326312470
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh cat /mount-9p/test-1759438697326312470
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-535239 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b59e7923-f534-4e29-bea4-beb860ebdf87] Pending
helpers_test.go:352: "busybox-mount" [b59e7923-f534-4e29-bea4-beb860ebdf87] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b59e7923-f534-4e29-bea4-beb860ebdf87] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
I1002 20:58:23.412202  703895 retry.go:31] will retry after 9.048799778s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:352: "busybox-mount" [b59e7923-f534-4e29-bea4-beb860ebdf87] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005269013s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-535239 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdany-port883963750/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.74s)
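The "will retry after …" lines come from minikube's retry helper, which re-runs the findmnt probe with growing randomized delays until the 9p mount shows up. A self-contained sketch of that backoff pattern (our own loop, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn with a roughly doubling, jittered delay between
// attempts, returning the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 { // pretend the mount appears on the third probe
			return errors.New("findmnt: /mount-9p not mounted yet")
		}
		return nil
	})
}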

TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdspecific-port1395020004/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (431.298224ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 20:58:25.496220  703895 retry.go:31] will retry after 694.665902ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdspecific-port1395020004/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh "sudo umount -f /mount-9p": exit status 1 (275.984021ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-535239 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdspecific-port1395020004/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T" /mount1: exit status 1 (633.840784ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 20:58:27.864932  703895 retry.go:31] will retry after 348.401067ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535239 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-535239 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2891330289/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-535239 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-535239
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-535239
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-535239
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (157.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1002 21:03:49.231039  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.237447  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.248813  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.270235  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.311630  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.393050  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.554602  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:49.876260  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:50.518451  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:51.800136  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:54.361453  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:03:59.482889  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:04:09.724746  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:04:30.206472  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:11.167986  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:55.463941  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m36.339100315s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (157.24s)

TestMultiControlPlane/serial/DeployApp (7.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 kubectl -- rollout status deployment/busybox: (4.798985024s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-2vrtn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-5rrnt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-bpz5r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-2vrtn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-5rrnt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-bpz5r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-2vrtn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-5rrnt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-bpz5r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.87s)
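The DNS checks above fan out over every busybox replica and resolve three names in each pod. Condensed, the verification is just a nested loop; a sketch (pod names are hard-coded from this run, where the test derives them from the preceding `get pods` call):

package main

import (
	"os"
	"os/exec"
)

func main() {
	pods := []string{
		"busybox-7b57f96db7-2vrtn",
		"busybox-7b57f96db7-5rrnt",
		"busybox-7b57f96db7-bpz5r",
	}
	names := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-076837",
				"kubectl", "--", "exec", pod, "--", "nslookup", name)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				os.Exit(1)
			}
		}
	}
}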

TestMultiControlPlane/serial/PingHostFromPods (1.79s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-2vrtn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-2vrtn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-5rrnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-5rrnt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-bpz5r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 kubectl -- exec busybox-7b57f96db7-bpz5r -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)

TestMultiControlPlane/serial/AddWorkerNode (36.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node add --alsologtostderr -v 5
E1002 21:06:33.090458  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 node add --alsologtostderr -v 5: (35.192064042s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5: (1.090884933s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.28s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-076837 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.042165081s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (20.41s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 status --output json --alsologtostderr -v 5: (1.109618602s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp testdata/cp-test.txt ha-076837:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3051268500/001/cp-test_ha-076837.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837:/home/docker/cp-test.txt ha-076837-m02:/home/docker/cp-test_ha-076837_ha-076837-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test_ha-076837_ha-076837-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837:/home/docker/cp-test.txt ha-076837-m03:/home/docker/cp-test_ha-076837_ha-076837-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test_ha-076837_ha-076837-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837:/home/docker/cp-test.txt ha-076837-m04:/home/docker/cp-test_ha-076837_ha-076837-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test_ha-076837_ha-076837-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp testdata/cp-test.txt ha-076837-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3051268500/001/cp-test_ha-076837-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m02:/home/docker/cp-test.txt ha-076837:/home/docker/cp-test_ha-076837-m02_ha-076837.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test_ha-076837-m02_ha-076837.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m02:/home/docker/cp-test.txt ha-076837-m03:/home/docker/cp-test_ha-076837-m02_ha-076837-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test_ha-076837-m02_ha-076837-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m02:/home/docker/cp-test.txt ha-076837-m04:/home/docker/cp-test_ha-076837-m02_ha-076837-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test_ha-076837-m02_ha-076837-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp testdata/cp-test.txt ha-076837-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3051268500/001/cp-test_ha-076837-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m03:/home/docker/cp-test.txt ha-076837:/home/docker/cp-test_ha-076837-m03_ha-076837.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test_ha-076837-m03_ha-076837.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m03:/home/docker/cp-test.txt ha-076837-m02:/home/docker/cp-test_ha-076837-m03_ha-076837-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test_ha-076837-m03_ha-076837-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m03:/home/docker/cp-test.txt ha-076837-m04:/home/docker/cp-test_ha-076837-m03_ha-076837-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test_ha-076837-m03_ha-076837-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp testdata/cp-test.txt ha-076837-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3051268500/001/cp-test_ha-076837-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m04:/home/docker/cp-test.txt ha-076837:/home/docker/cp-test_ha-076837-m04_ha-076837.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test_ha-076837-m04_ha-076837.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m04:/home/docker/cp-test.txt ha-076837-m02:/home/docker/cp-test_ha-076837-m04_ha-076837-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test_ha-076837-m04_ha-076837-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 cp ha-076837-m04:/home/docker/cp-test.txt ha-076837-m03:/home/docker/cp-test_ha-076837-m04_ha-076837-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m04 "sudo cat /home/docker/cp-test.txt"
E1002 21:07:18.528682  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 ssh -n ha-076837-m03 "sudo cat /home/docker/cp-test_ha-076837-m04_ha-076837-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.41s)
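The pass above exercises every direction "minikube cp" supports in a multi-node cluster: host to node, node to host, and node to node, each copy verified with an ssh cat of the destination. A minimal sketch of the same round-trip, assuming an installed minikube binary stands in for the out/minikube-linux-arm64 build artifact and reusing this run's ha-076837 profile:

    # host -> node, then read it back over ssh
    minikube -p ha-076837 cp testdata/cp-test.txt ha-076837:/home/docker/cp-test.txt
    minikube -p ha-076837 ssh -n ha-076837 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p ha-076837 cp ha-076837:/home/docker/cp-test.txt /tmp/cp-test_ha-076837.txt
    # node -> node (ha-076837-m02 is the second control-plane node in this run)
    minikube -p ha-076837 cp ha-076837:/home/docker/cp-test.txt ha-076837-m02:/home/docker/cp-test_copy.txt
    minikube -p ha-076837 ssh -n ha-076837-m02 "sudo cat /home/docker/cp-test_copy.txt"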

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 node stop m02 --alsologtostderr -v 5: (11.17555701s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5: exit status 7 (743.903896ms)

                                                
                                                
-- stdout --
	ha-076837
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076837-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076837-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076837-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:07:30.354507  785949 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:30.354659  785949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:30.354669  785949 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:30.354674  785949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:30.355076  785949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 21:07:30.355332  785949 out.go:368] Setting JSON to false
	I1002 21:07:30.355387  785949 mustload.go:65] Loading cluster: ha-076837
	I1002 21:07:30.356070  785949 config.go:182] Loaded profile config "ha-076837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 21:07:30.356090  785949 status.go:174] checking status of ha-076837 ...
	I1002 21:07:30.356945  785949 cli_runner.go:164] Run: docker container inspect ha-076837 --format={{.State.Status}}
	I1002 21:07:30.357248  785949 notify.go:220] Checking for updates...
	I1002 21:07:30.382464  785949 status.go:371] ha-076837 host status = "Running" (err=<nil>)
	I1002 21:07:30.382495  785949 host.go:66] Checking if "ha-076837" exists ...
	I1002 21:07:30.382921  785949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-076837
	I1002 21:07:30.403004  785949 host.go:66] Checking if "ha-076837" exists ...
	I1002 21:07:30.403299  785949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:07:30.403360  785949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-076837
	I1002 21:07:30.420267  785949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/ha-076837/id_rsa Username:docker}
	I1002 21:07:30.514837  785949 ssh_runner.go:195] Run: systemctl --version
	I1002 21:07:30.521216  785949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:07:30.533892  785949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:30.602577  785949 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 21:07:30.59234625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:07:30.603122  785949 kubeconfig.go:125] found "ha-076837" server: "https://192.168.49.254:8443"
	I1002 21:07:30.603188  785949 api_server.go:166] Checking apiserver status ...
	I1002 21:07:30.603248  785949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:07:30.617097  785949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2123/cgroup
	I1002 21:07:30.626024  785949 api_server.go:182] apiserver freezer: "10:freezer:/docker/28471979bb4913ebc0c8b0795f161fa12536683adb0ab96674fa31e130d3d43e/kubepods/burstable/podf741c62ffd61f0ae051631492fdbef32/ef8b11bd48fd9c129c8252b390b5b2ab08c38a6cffe41a170e5986191c2fdf33"
	I1002 21:07:30.626128  785949 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/28471979bb4913ebc0c8b0795f161fa12536683adb0ab96674fa31e130d3d43e/kubepods/burstable/podf741c62ffd61f0ae051631492fdbef32/ef8b11bd48fd9c129c8252b390b5b2ab08c38a6cffe41a170e5986191c2fdf33/freezer.state
	I1002 21:07:30.634060  785949 api_server.go:204] freezer state: "THAWED"
	I1002 21:07:30.634092  785949 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:07:30.642572  785949 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:07:30.642647  785949 status.go:463] ha-076837 apiserver status = Running (err=<nil>)
	I1002 21:07:30.642666  785949 status.go:176] ha-076837 status: &{Name:ha-076837 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:07:30.642685  785949 status.go:174] checking status of ha-076837-m02 ...
	I1002 21:07:30.643037  785949 cli_runner.go:164] Run: docker container inspect ha-076837-m02 --format={{.State.Status}}
	I1002 21:07:30.661299  785949 status.go:371] ha-076837-m02 host status = "Stopped" (err=<nil>)
	I1002 21:07:30.661324  785949 status.go:384] host is not running, skipping remaining checks
	I1002 21:07:30.661331  785949 status.go:176] ha-076837-m02 status: &{Name:ha-076837-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:07:30.661353  785949 status.go:174] checking status of ha-076837-m03 ...
	I1002 21:07:30.661731  785949 cli_runner.go:164] Run: docker container inspect ha-076837-m03 --format={{.State.Status}}
	I1002 21:07:30.679194  785949 status.go:371] ha-076837-m03 host status = "Running" (err=<nil>)
	I1002 21:07:30.679220  785949 host.go:66] Checking if "ha-076837-m03" exists ...
	I1002 21:07:30.679523  785949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-076837-m03
	I1002 21:07:30.697045  785949 host.go:66] Checking if "ha-076837-m03" exists ...
	I1002 21:07:30.697362  785949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:07:30.697411  785949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-076837-m03
	I1002 21:07:30.715103  785949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33555 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/ha-076837-m03/id_rsa Username:docker}
	I1002 21:07:30.811427  785949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:07:30.826649  785949 kubeconfig.go:125] found "ha-076837" server: "https://192.168.49.254:8443"
	I1002 21:07:30.826682  785949 api_server.go:166] Checking apiserver status ...
	I1002 21:07:30.826724  785949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:07:30.839578  785949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2119/cgroup
	I1002 21:07:30.848499  785949 api_server.go:182] apiserver freezer: "10:freezer:/docker/347f27e104918ab30dc8ab22f71ba13c00f43c1c8065bae4b8bd1f3262c60ba6/kubepods/burstable/pod5fa0ba9ca5cc7213c654ef2db8186421/4061f1859c5043f1b9de8aff45030b486f5c59d459a8a1f8b7fac866baa60270"
	I1002 21:07:30.848577  785949 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/347f27e104918ab30dc8ab22f71ba13c00f43c1c8065bae4b8bd1f3262c60ba6/kubepods/burstable/pod5fa0ba9ca5cc7213c654ef2db8186421/4061f1859c5043f1b9de8aff45030b486f5c59d459a8a1f8b7fac866baa60270/freezer.state
	I1002 21:07:30.856322  785949 api_server.go:204] freezer state: "THAWED"
	I1002 21:07:30.856350  785949 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:07:30.865201  785949 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:07:30.865231  785949 status.go:463] ha-076837-m03 apiserver status = Running (err=<nil>)
	I1002 21:07:30.865241  785949 status.go:176] ha-076837-m03 status: &{Name:ha-076837-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:07:30.865257  785949 status.go:174] checking status of ha-076837-m04 ...
	I1002 21:07:30.865609  785949 cli_runner.go:164] Run: docker container inspect ha-076837-m04 --format={{.State.Status}}
	I1002 21:07:30.882845  785949 status.go:371] ha-076837-m04 host status = "Running" (err=<nil>)
	I1002 21:07:30.882869  785949 host.go:66] Checking if "ha-076837-m04" exists ...
	I1002 21:07:30.883201  785949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-076837-m04
	I1002 21:07:30.900607  785949 host.go:66] Checking if "ha-076837-m04" exists ...
	I1002 21:07:30.901052  785949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:07:30.901157  785949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-076837-m04
	I1002 21:07:30.918855  785949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33560 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/ha-076837-m04/id_rsa Username:docker}
	I1002 21:07:31.023352  785949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:07:31.038774  785949 status.go:176] ha-076837-m04 status: &{Name:ha-076837-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.92s)
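Note the exit status 7 above: once any node is stopped, "minikube status" deliberately exits non-zero while still printing per-node detail, and the test treats that as the expected outcome. A short sketch of the same check, assuming the ha-076837 profile from this run:

    minikube -p ha-076837 node stop m02
    minikube -p ha-076837 status
    echo $?    # 7 here: a stopped node is reported through the exit code, not only the text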

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (46.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 node start m02 --alsologtostderr -v 5: (45.425061954s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5: (1.35361129s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (46.90s)
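Restarting the stopped node is the inverse operation; once m02 is back, status exits zero again and the node reappears in the API server's view. A minimal sketch under the same assumptions as above:

    minikube -p ha-076837 node start m02
    minikube -p ha-076837 status    # exits 0 once every node is Running again
    kubectl get nodes               # m02 rejoins the cluster's node list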

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.117062698s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (189.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 stop --alsologtostderr -v 5
E1002 21:08:49.231462  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 stop --alsologtostderr -v 5: (33.918277085s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 start --wait true --alsologtostderr -v 5
E1002 21:09:16.931797  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:55.463781  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 start --wait true --alsologtostderr -v 5: (2m35.788592384s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (189.87s)
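The invariant this test asserts is that a full stop/start cycle preserves the node set. One way to check the same thing by hand, assuming the ha-076837 profile, is to compare the node list verbatim before and after:

    minikube -p ha-076837 node list > /tmp/nodes-before.txt
    minikube -p ha-076837 stop
    minikube -p ha-076837 start --wait true
    minikube -p ha-076837 node list > /tmp/nodes-after.txt
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt   # empty diff: all nodes survived the restart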

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 node delete m03 --alsologtostderr -v 5: (10.451876762s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.42s)
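After deleting a node, the test confirms cluster health with the go-template query shown above, which prints one Ready status per remaining node. The same check by hand, assuming the ha-076837 profile:

    minikube -p ha-076837 node delete m03
    kubectl get nodes
    # one "True" line per surviving node:
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'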

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 stop --alsologtostderr -v 5: (32.657702447s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5: exit status 7 (114.345473ms)

                                                
                                                
-- stdout --
	ha-076837
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076837-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076837-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:12:14.651476  813348 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:12:14.651590  813348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:12:14.651601  813348 out.go:374] Setting ErrFile to fd 2...
	I1002 21:12:14.651606  813348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:12:14.651880  813348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 21:12:14.652057  813348 out.go:368] Setting JSON to false
	I1002 21:12:14.652107  813348 mustload.go:65] Loading cluster: ha-076837
	I1002 21:12:14.652176  813348 notify.go:220] Checking for updates...
	I1002 21:12:14.653367  813348 config.go:182] Loaded profile config "ha-076837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 21:12:14.653392  813348 status.go:174] checking status of ha-076837 ...
	I1002 21:12:14.654091  813348 cli_runner.go:164] Run: docker container inspect ha-076837 --format={{.State.Status}}
	I1002 21:12:14.671382  813348 status.go:371] ha-076837 host status = "Stopped" (err=<nil>)
	I1002 21:12:14.671405  813348 status.go:384] host is not running, skipping remaining checks
	I1002 21:12:14.671412  813348 status.go:176] ha-076837 status: &{Name:ha-076837 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:12:14.671443  813348 status.go:174] checking status of ha-076837-m02 ...
	I1002 21:12:14.671749  813348 cli_runner.go:164] Run: docker container inspect ha-076837-m02 --format={{.State.Status}}
	I1002 21:12:14.693204  813348 status.go:371] ha-076837-m02 host status = "Stopped" (err=<nil>)
	I1002 21:12:14.693227  813348 status.go:384] host is not running, skipping remaining checks
	I1002 21:12:14.693245  813348 status.go:176] ha-076837-m02 status: &{Name:ha-076837-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:12:14.693269  813348 status.go:174] checking status of ha-076837-m04 ...
	I1002 21:12:14.693636  813348 cli_runner.go:164] Run: docker container inspect ha-076837-m04 --format={{.State.Status}}
	I1002 21:12:14.714532  813348 status.go:371] ha-076837-m04 host status = "Stopped" (err=<nil>)
	I1002 21:12:14.714551  813348 status.go:384] host is not running, skipping remaining checks
	I1002 21:12:14.714558  813348 status.go:176] ha-076837-m04 status: &{Name:ha-076837-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (103.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1002 21:13:49.230693  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m42.548645131s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.58s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (61.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 node add --control-plane --alsologtostderr -v 5: (1m0.386148506s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-076837 status --alsologtostderr -v 5: (1.481560961s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (61.87s)
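Adding a control-plane node is symmetric to deleting one; with no name argument, minikube picks the next free mNN suffix itself. A minimal sketch, assuming the ha-076837 profile:

    minikube -p ha-076837 node add --control-plane
    minikube -p ha-076837 status    # the new node appears as another "type: Control Plane" entry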

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.153237032s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.15s)

                                                
                                    
TestImageBuild/serial/Setup (31.59s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-866266 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-866266 --driver=docker  --container-runtime=docker: (31.594540273s)
--- PASS: TestImageBuild/serial/Setup (31.59s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.84s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-866266
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-866266: (1.836709839s)
--- PASS: TestImageBuild/serial/NormalBuild (1.84s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.06s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-866266
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-866266: (1.057331239s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.06s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.9s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-866266
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.90s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.03s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-866266
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-866266: (1.026008516s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.03s)
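The four image-build subtests above map onto three flag combinations of "minikube image build". Restated as plain commands against the image-866266 profile from this run:

    # plain build from a directory containing a Dockerfile
    minikube -p image-866266 image build -t aaa:latest ./testdata/image-build/test-normal
    # forward a build arg and disable the cache
    minikube -p image-866266 image build -t aaa:latest \
        --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
    # build with a Dockerfile at a non-default path inside the context
    minikube -p image-866266 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f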

                                                
                                    
TestJSONOutput/start/Command (80.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-411347 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1002 21:15:55.463750  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-411347 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m20.932428119s)
--- PASS: TestJSONOutput/start/Command (80.94s)
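With --output=json, minikube emits one CloudEvents-style JSON object per line (the same format visible in the TestErrorJSONOutput output further down). Assuming jq is available on the host, the step events can be filtered out of the stream, for example:

    minikube start -p json-output-411347 --output=json --user=testUser \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.name'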

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-411347 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-411347 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (11.05s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-411347 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-411347 --output=json --user=testUser: (11.053723817s)
--- PASS: TestJSONOutput/stop/Command (11.05s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-543899 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-543899 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.092836ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4918ad8d-7bc5-4b92-9afd-29825af0602e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-543899] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8fcba18-53b9-4a56-9214-08699b459541","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"38dcbd03-a795-4080-9c0d-c905a1ae56cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2d3c1328-37e4-4e6e-9cf2-c42df3c17d7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig"}}
	{"specversion":"1.0","id":"a6b1daa5-e981-4aa7-a53b-5a484bd8aaba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube"}}
	{"specversion":"1.0","id":"65e5a3d2-aa47-43dd-8d55-c89582a587c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a1a9d296-c402-436f-9dab-3a5331971bcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"487332f6-baa0-4ba5-b4e7-64167a722745","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-543899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-543899
--- PASS: TestErrorJSONOutput (0.25s)
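Failures surface in the same JSON stream as an io.k8s.sigs.minikube.error event whose data block carries the exit code and error name (here DRV_UNSUPPORTED_OS and 56, matching the process exit status). A sketch of pulling that out, assuming jq and an arbitrary profile name:

    minikube start -p json-output-error-demo --driver=fail --output=json \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.name'
    # prints: 56 DRV_UNSUPPORTED_OS   (the minikube process itself also exits 56)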

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.96s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-208668 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-208668 --network=: (34.613574383s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-208668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-208668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-208668: (2.323554643s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.96s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.61s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-471320 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-471320 --network=bridge: (33.5384241s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-471320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-471320
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-471320: (2.040526573s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.61s)
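The two KicCustomNetwork cases differ only in the --network value: an empty value tells minikube to create and manage its own docker network for the node container, while --network=bridge attaches it to docker's built-in bridge. Sketch, with profile names borrowed from this run:

    minikube start -p docker-network-208668 --network=         # minikube creates the network itself
    minikube start -p docker-network-471320 --network=bridge   # reuse docker's default bridge
    docker network ls --format '{{.Name}}'                     # inspect what actually exists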

                                                
                                    
TestKicExistingNetwork (34.18s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1002 21:18:36.770003  703895 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 21:18:36.784190  703895 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 21:18:36.784309  703895 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 21:18:36.784332  703895 cli_runner.go:164] Run: docker network inspect existing-network
W1002 21:18:36.799947  703895 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 21:18:36.799981  703895 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1002 21:18:36.799997  703895 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1002 21:18:36.800140  703895 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:18:36.817033  703895 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee0a77aa20d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:50:ae:2a:40:06} reservation:<nil>}
I1002 21:18:36.817342  703895 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400180f180}
I1002 21:18:36.817371  703895 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 21:18:36.817480  703895 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 21:18:36.876422  703895 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-446214 --network=existing-network
E1002 21:18:49.230893  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-446214 --network=existing-network: (31.974627672s)
helpers_test.go:175: Cleaning up "existing-network-446214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-446214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-446214: (2.065254028s)
I1002 21:19:10.933052  703895 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.18s)
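Here the test harness pre-creates the docker network (the network_create lines above come from the test process, which picked the free 192.168.58.0/24 subnet), and minikube then joins it rather than allocating its own. The equivalent by hand, with the docker network create trimmed to its essential flags:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-446214 --network=existing-network
    docker network ls --format '{{.Name}}'   # existing-network is reused, not recreated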

                                                
                                    
TestKicCustomSubnet (35.38s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-892342 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-892342 --subnet=192.168.60.0/24: (33.1726529s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-892342 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-892342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-892342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-892342: (2.185731782s)
--- PASS: TestKicCustomSubnet (35.38s)
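--subnet pins the CIDR of the network minikube creates, and the test verifies it straight from docker's IPAM config. The same two steps:

    minikube start -p custom-subnet-892342 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-892342 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expect: 192.168.60.0/24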

                                                
                                    
TestKicStaticIP (37.26s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-882757 --static-ip=192.168.200.200
E1002 21:20:12.293622  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-882757 --static-ip=192.168.200.200: (35.04276563s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-882757 ip
helpers_test.go:175: Cleaning up "static-ip-882757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-882757
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-882757: (2.034537571s)
--- PASS: TestKicStaticIP (37.26s)
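--static-ip goes a step further and fixes the node's address itself, which "minikube ip" then echoes back. Sketch, reusing this run's values:

    minikube start -p static-ip-882757 --static-ip=192.168.200.200
    minikube -p static-ip-882757 ip    # expect: 192.168.200.200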

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (81.56s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-136424 --driver=docker  --container-runtime=docker
E1002 21:20:55.464436  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-136424 --driver=docker  --container-runtime=docker: (35.013851493s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-138869 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-138869 --driver=docker  --container-runtime=docker: (40.614765461s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-136424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-138869
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-138869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-138869
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-138869: (2.200353388s)
helpers_test.go:175: Cleaning up "first-136424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-136424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-136424: (2.277911298s)
--- PASS: TestMinikubeProfile (81.56s)
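The profile test drives two clusters side by side and flips the active profile between them: "minikube profile NAME" selects the default target for subsequent commands, and "profile list -ojson" exposes both in machine-readable form. A condensed sketch (driver flags omitted):

    minikube start -p first-136424
    minikube start -p second-138869
    minikube profile first-136424    # make first-136424 the active profile
    minikube profile list -ojson     # both profiles, as JSON
    minikube delete -p second-138869
    minikube delete -p first-136424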

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-651170 --memory=3072 --mount-string /tmp/TestMountStartserial2816710047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-651170 --memory=3072 --mount-string /tmp/TestMountStartserial2816710047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.298658267s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.30s)
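
The mount options exercised here map one-to-one onto start flags; a minimal sketch mirroring the command above (host path, profile name, and port illustrative):

    minikube start -p mount-demo --memory=3072 --no-kubernetes --driver=docker \
      --mount-string /tmp/demo:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
    # The host directory should now be visible inside the node
    minikube -p mount-demo ssh -- ls /minikube-host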

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-651170 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-653582 --memory=3072 --mount-string /tmp/TestMountStartserial2816710047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-653582 --memory=3072 --mount-string /tmp/TestMountStartserial2816710047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.601271793s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.60s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-653582 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-651170 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-651170 --alsologtostderr -v=5: (1.486374657s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-653582 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-653582
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-653582: (1.22265623s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.6s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-653582
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-653582: (7.598857276s)
--- PASS: TestMountStart/serial/RestartStopped (8.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-653582 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719662 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1002 21:23:49.230809  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719662 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.342245262s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.91s)
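
Multi-node bring-up needs only the node count; a sketch mirroring the flags above (profile name illustrative):

    # Two nodes: one control plane plus one worker, waiting for all components
    minikube start -p multi-demo --nodes=2 --memory=3072 --wait=true --driver=docker --container-runtime=docker
    minikube -p multi-demo status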

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-719662 -- rollout status deployment/busybox: (3.717946025s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-c6qxt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-gsc2j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-c6qxt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-gsc2j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-c6qxt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-gsc2j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)
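
The DNS checks above run kubectl through the bundled wrapper so the matching context is used; a sketch (manifest path and pod names illustrative):

    minikube kubectl -p multi-demo -- apply -f multinode-pod-dns-test.yaml
    minikube kubectl -p multi-demo -- rollout status deployment/busybox
    # Resolve an in-cluster and an external name from a replica
    minikube kubectl -p multi-demo -- exec <pod-name> -- nslookup kubernetes.default
    minikube kubectl -p multi-demo -- exec <pod-name> -- nslookup kubernetes.io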

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-c6qxt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-c6qxt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-gsc2j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719662 -- exec busybox-7b57f96db7-gsc2j -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
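
The host-reachability check is two steps: resolve host.minikube.internal inside the pod, then ping the resulting address. The same pipeline by hand (pod name illustrative; the address comes from the first command):

    minikube kubectl -p multi-demo -- exec <pod-name> -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # Ping the address printed above, from inside the same pod
    minikube kubectl -p multi-demo -- exec <pod-name> -- sh -c "ping -c 1 192.168.67.1"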

                                                
                                    
TestMultiNode/serial/AddNode (35.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-719662 -v=5 --alsologtostderr
E1002 21:23:58.530650  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-719662 -v=5 --alsologtostderr: (34.557419153s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.24s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-719662 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp testdata/cp-test.txt multinode-719662:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4140908176/001/cp-test_multinode-719662.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662:/home/docker/cp-test.txt multinode-719662-m02:/home/docker/cp-test_multinode-719662_multinode-719662-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m02 "sudo cat /home/docker/cp-test_multinode-719662_multinode-719662-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662:/home/docker/cp-test.txt multinode-719662-m03:/home/docker/cp-test_multinode-719662_multinode-719662-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m03 "sudo cat /home/docker/cp-test_multinode-719662_multinode-719662-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp testdata/cp-test.txt multinode-719662-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4140908176/001/cp-test_multinode-719662-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662-m02:/home/docker/cp-test.txt multinode-719662:/home/docker/cp-test_multinode-719662-m02_multinode-719662.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662 "sudo cat /home/docker/cp-test_multinode-719662-m02_multinode-719662.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662-m02:/home/docker/cp-test.txt multinode-719662-m03:/home/docker/cp-test_multinode-719662-m02_multinode-719662-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m03 "sudo cat /home/docker/cp-test_multinode-719662-m02_multinode-719662-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp testdata/cp-test.txt multinode-719662-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4140908176/001/cp-test_multinode-719662-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662-m03:/home/docker/cp-test.txt multinode-719662:/home/docker/cp-test_multinode-719662-m03_multinode-719662.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662 "sudo cat /home/docker/cp-test_multinode-719662-m03_multinode-719662.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 cp multinode-719662-m03:/home/docker/cp-test.txt multinode-719662-m02:/home/docker/cp-test_multinode-719662-m03_multinode-719662-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 ssh -n multinode-719662-m02 "sudo cat /home/docker/cp-test_multinode-719662-m03_multinode-719662-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.31s)
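
The copy matrix above covers all three directions that minikube cp supports; a condensed sketch (file and profile names illustrative):

    # Host -> node
    minikube -p multi-demo cp cp-test.txt multi-demo:/home/docker/cp-test.txt
    # Node -> host
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt ./cp-test-from-node.txt
    # Node -> node (both sides addressed by node name)
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
    # Verify over ssh on the target node
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"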

                                                
                                    
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-719662 node stop m03: (1.244362989s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719662 status: exit status 7 (540.356693ms)

                                                
                                                
-- stdout --
	multinode-719662
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-719662-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-719662-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr: exit status 7 (640.153226ms)

                                                
                                                
-- stdout --
	multinode-719662
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-719662-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-719662-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:24:44.820619  886782 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:24:44.820792  886782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:24:44.820804  886782 out.go:374] Setting ErrFile to fd 2...
	I1002 21:24:44.820809  886782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:24:44.821067  886782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 21:24:44.821260  886782 out.go:368] Setting JSON to false
	I1002 21:24:44.821308  886782 mustload.go:65] Loading cluster: multinode-719662
	I1002 21:24:44.821482  886782 notify.go:220] Checking for updates...
	I1002 21:24:44.821728  886782 config.go:182] Loaded profile config "multinode-719662": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 21:24:44.821746  886782 status.go:174] checking status of multinode-719662 ...
	I1002 21:24:44.822266  886782 cli_runner.go:164] Run: docker container inspect multinode-719662 --format={{.State.Status}}
	I1002 21:24:44.842223  886782 status.go:371] multinode-719662 host status = "Running" (err=<nil>)
	I1002 21:24:44.842251  886782 host.go:66] Checking if "multinode-719662" exists ...
	I1002 21:24:44.842561  886782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-719662
	I1002 21:24:44.868414  886782 host.go:66] Checking if "multinode-719662" exists ...
	I1002 21:24:44.868735  886782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:24:44.868796  886782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-719662
	I1002 21:24:44.888871  886782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33670 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/multinode-719662/id_rsa Username:docker}
	I1002 21:24:44.983097  886782 ssh_runner.go:195] Run: systemctl --version
	I1002 21:24:44.989604  886782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:45.013176  886782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:24:45.126700  886782 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:24:45.113011481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:24:45.127367  886782 kubeconfig.go:125] found "multinode-719662" server: "https://192.168.67.2:8443"
	I1002 21:24:45.127420  886782 api_server.go:166] Checking apiserver status ...
	I1002 21:24:45.127475  886782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:24:45.146075  886782 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2166/cgroup
	I1002 21:24:45.159156  886782 api_server.go:182] apiserver freezer: "10:freezer:/docker/0676bc9f735d45871f0c5f6534151dfac63817aea61fe544388a0a54c0634127/kubepods/burstable/pod92ee0c346bb983189065dd96bc252eaa/0617cb26aed325b703d770f191df3041f1ff3e825bbd8d9122d80be1689d1c7e"
	I1002 21:24:45.159251  886782 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0676bc9f735d45871f0c5f6534151dfac63817aea61fe544388a0a54c0634127/kubepods/burstable/pod92ee0c346bb983189065dd96bc252eaa/0617cb26aed325b703d770f191df3041f1ff3e825bbd8d9122d80be1689d1c7e/freezer.state
	I1002 21:24:45.170752  886782 api_server.go:204] freezer state: "THAWED"
	I1002 21:24:45.170796  886782 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 21:24:45.179776  886782 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 21:24:45.179817  886782 status.go:463] multinode-719662 apiserver status = Running (err=<nil>)
	I1002 21:24:45.179831  886782 status.go:176] multinode-719662 status: &{Name:multinode-719662 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:24:45.179871  886782 status.go:174] checking status of multinode-719662-m02 ...
	I1002 21:24:45.180316  886782 cli_runner.go:164] Run: docker container inspect multinode-719662-m02 --format={{.State.Status}}
	I1002 21:24:45.211851  886782 status.go:371] multinode-719662-m02 host status = "Running" (err=<nil>)
	I1002 21:24:45.211882  886782 host.go:66] Checking if "multinode-719662-m02" exists ...
	I1002 21:24:45.213315  886782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-719662-m02
	I1002 21:24:45.235183  886782 host.go:66] Checking if "multinode-719662-m02" exists ...
	I1002 21:24:45.235638  886782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:24:45.235706  886782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-719662-m02
	I1002 21:24:45.262225  886782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/21682-702037/.minikube/machines/multinode-719662-m02/id_rsa Username:docker}
	I1002 21:24:45.370448  886782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:45.384303  886782 status.go:176] multinode-719662-m02 status: &{Name:multinode-719662-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:24:45.384352  886782 status.go:174] checking status of multinode-719662-m03 ...
	I1002 21:24:45.384676  886782 cli_runner.go:164] Run: docker container inspect multinode-719662-m03 --format={{.State.Status}}
	I1002 21:24:45.402548  886782 status.go:371] multinode-719662-m03 host status = "Stopped" (err=<nil>)
	I1002 21:24:45.402622  886782 status.go:384] host is not running, skipping remaining checks
	I1002 21:24:45.402673  886782 status.go:176] multinode-719662-m03 status: &{Name:multinode-719662-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
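
Note the exit-code contract visible above: status exits 7 (rather than 0) whenever any node in the profile is stopped, while still printing per-node state. Sketch:

    minikube -p multi-demo node stop m03
    minikube -p multi-demo status
    echo $?   # 7 expected while m03 is stopped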

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-719662 node start m03 -v=5 --alsologtostderr: (8.488693335s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.28s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-719662
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-719662
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-719662: (22.803394838s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719662 --wait=true -v=5 --alsologtostderr
E1002 21:25:55.464049  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719662 --wait=true -v=5 --alsologtostderr: (55.939930576s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-719662
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.86s)
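
The restart path above is a full stop of the profile followed by a plain start; the node set should survive. Sketch:

    minikube node list -p multi-demo     # record the node set
    minikube stop -p multi-demo
    minikube start -p multi-demo --wait=true
    minikube node list -p multi-demo     # same nodes should be listed again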

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-719662 node delete m03: (4.974202872s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-719662 stop: (21.648284061s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719662 status: exit status 7 (95.325428ms)

                                                
                                                
-- stdout --
	multinode-719662
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-719662-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr: exit status 7 (94.568584ms)

                                                
                                                
-- stdout --
	multinode-719662
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-719662-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:26:41.017694  900433 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:26:41.017813  900433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:26:41.017825  900433 out.go:374] Setting ErrFile to fd 2...
	I1002 21:26:41.017830  900433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:26:41.018080  900433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-702037/.minikube/bin
	I1002 21:26:41.018267  900433 out.go:368] Setting JSON to false
	I1002 21:26:41.018313  900433 mustload.go:65] Loading cluster: multinode-719662
	I1002 21:26:41.018412  900433 notify.go:220] Checking for updates...
	I1002 21:26:41.018705  900433 config.go:182] Loaded profile config "multinode-719662": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 21:26:41.018725  900433 status.go:174] checking status of multinode-719662 ...
	I1002 21:26:41.019584  900433 cli_runner.go:164] Run: docker container inspect multinode-719662 --format={{.State.Status}}
	I1002 21:26:41.038817  900433 status.go:371] multinode-719662 host status = "Stopped" (err=<nil>)
	I1002 21:26:41.038843  900433 status.go:384] host is not running, skipping remaining checks
	I1002 21:26:41.038850  900433 status.go:176] multinode-719662 status: &{Name:multinode-719662 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:26:41.038874  900433 status.go:174] checking status of multinode-719662-m02 ...
	I1002 21:26:41.039186  900433 cli_runner.go:164] Run: docker container inspect multinode-719662-m02 --format={{.State.Status}}
	I1002 21:26:41.059092  900433 status.go:371] multinode-719662-m02 host status = "Stopped" (err=<nil>)
	I1002 21:26:41.059121  900433 status.go:384] host is not running, skipping remaining checks
	I1002 21:26:41.059136  900433 status.go:176] multinode-719662-m02 status: &{Name:multinode-719662-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719662 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719662 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (53.13247154s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719662 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.82s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-719662
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719662-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-719662-m02 --driver=docker  --container-runtime=docker: exit status 14 (109.020703ms)

                                                
                                                
-- stdout --
	* [multinode-719662-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-719662-m02' is duplicated with machine name 'multinode-719662-m02' in profile 'multinode-719662'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719662-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719662-m03 --driver=docker  --container-runtime=docker: (37.716711734s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-719662
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-719662: exit status 80 (337.214766ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-719662 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-719662-m03 already exists in multinode-719662-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-719662-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-719662-m03: (2.071448885s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.29s)
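
Two distinct guards fire above: starting a new profile whose name collides with an existing cluster's machine name exits 14 (MK_USAGE), while node add into a cluster whose next generated node name already exists as a separate profile exits 80 (GUEST_NODE_ADD). Sketch (profile names illustrative, given an existing multi-demo cluster):

    minikube start -p multi-demo-m02 --driver=docker   # exit 14: duplicates the m02 machine name
    minikube start -p multi-demo-m03 --driver=docker   # succeeds as a standalone profile
    minikube node add -p multi-demo                    # exit 80: generated name m03 is taken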

                                                
                                    
TestPreload (126.61s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-136115 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E1002 21:28:49.231145  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-136115 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (53.773646875s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-136115 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-136115 image pull gcr.io/k8s-minikube/busybox: (2.494526635s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-136115
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-136115: (5.758965438s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-136115 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-136115 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m2.147243457s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-136115 image list
helpers_test.go:175: Cleaning up "test-preload-136115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-136115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-136115: (2.213781689s)
--- PASS: TestPreload (126.61s)
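
The preload scenario above starts without the preloaded image tarball on an older Kubernetes, pulls an extra image, then restarts onto the default version and checks the image survived. Sketch (profile name illustrative):

    minikube start -p preload-demo --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=3072 --wait=true --driver=docker
    minikube -p preload-demo image list   # busybox should still be present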

                                                
                                    
TestScheduledStopUnix (107.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-295560 --memory=3072 --driver=docker  --container-runtime=docker
E1002 21:30:55.464209  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-295560 --memory=3072 --driver=docker  --container-runtime=docker: (34.5187252s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-295560 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-295560 -n scheduled-stop-295560
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-295560 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 21:31:00.893558  703895 retry.go:31] will retry after 136.164µs: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.894761  703895 retry.go:31] will retry after 121.428µs: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.895880  703895 retry.go:31] will retry after 248.777µs: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.896996  703895 retry.go:31] will retry after 387.108µs: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.898076  703895 retry.go:31] will retry after 509.403µs: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.899159  703895 retry.go:31] will retry after 1.064577ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.901314  703895 retry.go:31] will retry after 663.636µs: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.902398  703895 retry.go:31] will retry after 1.074849ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.904541  703895 retry.go:31] will retry after 3.751985ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.908736  703895 retry.go:31] will retry after 2.122051ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.911944  703895 retry.go:31] will retry after 6.965917ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.919162  703895 retry.go:31] will retry after 9.237693ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.929413  703895 retry.go:31] will retry after 12.97479ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.942694  703895 retry.go:31] will retry after 15.971198ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.958932  703895 retry.go:31] will retry after 21.499488ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
I1002 21:31:00.981176  703895 retry.go:31] will retry after 50.225137ms: open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/scheduled-stop-295560/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-295560 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-295560 -n scheduled-stop-295560
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-295560
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-295560 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-295560
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-295560: exit status 7 (71.548305ms)

                                                
                                                
-- stdout --
	scheduled-stop-295560
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-295560 -n scheduled-stop-295560
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-295560 -n scheduled-stop-295560: exit status 7 (69.28941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-295560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-295560
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-295560: (1.666322721s)
--- PASS: TestScheduledStopUnix (107.79s)
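
The scheduled-stop lifecycle above (arm, inspect, cancel, re-arm) in command form (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m                  # arm a delayed stop
    minikube status -p sched-demo --format '{{.TimeToStop}}'   # inspect the countdown
    minikube stop -p sched-demo --cancel-scheduled             # disarm it
    minikube stop -p sched-demo --schedule 15s                 # re-arm; the host stops shortly after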

                                                
                                    
TestSkaffold (149.04s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3560760988 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-208583 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-208583 --memory=3072 --driver=docker  --container-runtime=docker: (33.0812656s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3560760988 run --minikube-profile skaffold-208583 --kube-context skaffold-208583 --status-check=true --port-forward=false --interactive=false
E1002 21:33:49.230579  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3560760988 run --minikube-profile skaffold-208583 --kube-context skaffold-208583 --status-check=true --port-forward=false --interactive=false: (1m31.942370479s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-545b6f56c8-l4q2f" [ab671dba-854b-47b2-9d35-d077b6e3a84f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003281729s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-79d7b747fd-82c4j" [24f6fb45-44e4-48af-a53c-0655401f1ae2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002893818s
helpers_test.go:175: Cleaning up "skaffold-208583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-208583
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-208583: (3.011969896s)
--- PASS: TestSkaffold (149.04s)
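
The skaffold integration above only needs a running profile and a matching kube-context; a sketch (profile name illustrative, assumes skaffold and minikube on PATH):

    minikube start -p skaffold-demo --memory=3072 --driver=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
      --status-check=true --port-forward=false --interactive=false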

                                                
                                    
TestInsufficientStorage (13.65s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-608303 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-608303 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.340360367s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9d3ead67-95d5-4284-a0d2-d3cd76f9408c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-608303] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae8a38c5-6d19-4fba-9d50-b910adba4574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"88539030-c0b6-4e1b-9664-16db90ae0ff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ceacb028-ba0f-4b81-9664-0f4d44e4df5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig"}}
	{"specversion":"1.0","id":"05f7af6c-3fa7-4707-919c-6a9df2288c86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube"}}
	{"specversion":"1.0","id":"110e4542-c8f6-4f31-81dc-6f51f2791910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"67ab780d-9b53-42d4-a258-8d7febaab2cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97342b0c-8236-446b-9819-241d66250235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8f6504de-655d-4141-91a4-2ebab2ccbf13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a7b456cd-64ac-4016-bc66-9b2aaf509adf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6cac2ad-1203-43cf-87c0-48b8849e6abf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9d503809-c65d-4b83-b1ad-71592987dda3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-608303\" primary control-plane node in \"insufficient-storage-608303\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a30f4b54-b4ad-4fa0-bc21-999234a242de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d0d55ed-1b8f-4d86-a12a-8e79f436228b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd6e83af-e626-4504-8878-5c5feafc9513","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-608303 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-608303 --output=json --layout=cluster: exit status 7 (300.274498ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-608303","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-608303","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:34:54.292894  934379 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-608303" does not appear in /home/jenkins/minikube-integration/21682-702037/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-608303 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-608303 --output=json --layout=cluster: exit status 7 (297.808364ms)

-- stdout --
	{"Name":"insufficient-storage-608303","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-608303","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1002 21:34:54.588740  934445 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-608303" does not appear in /home/jenkins/minikube-integration/21682-702037/kubeconfig
	E1002 21:34:54.598681  934445 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/insufficient-storage-608303/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-608303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-608303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-608303: (1.711210303s)
--- PASS: TestInsufficientStorage (13.65s)
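
The "--output=json --layout=cluster" payloads captured above are plain JSON, so the InsufficientStorage condition the test asserts on comes down to decoding a couple of top-level fields and checking for StatusCode 507. A minimal Go sketch, assuming only the field names visible in the log (the struct is an illustrative reduction, not minikube's own status type):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors just the fields read below; minikube's real type has more.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	// Trimmed from the log above; json.Unmarshal matches field names case-insensitively.
	payload := `{"Name":"insufficient-storage-608303","StatusCode":507,"StatusName":"InsufficientStorage"}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(payload), &st); err != nil {
		panic(err)
	}
	// 507 follows the HTTP "Insufficient Storage" convention seen in the output.
	fmt.Println(st.StatusCode == 507, st.StatusName)
}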

TestRunningBinaryUpgrade (88.53s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1634887024 start -p running-upgrade-351762 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1002 21:38:49.230954  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.625322  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.631980  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.643353  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.664819  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.706218  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.787738  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:28.949258  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:29.270968  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:29.912571  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:31.193869  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:33.755137  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1634887024 start -p running-upgrade-351762 --memory=3072 --vm-driver=docker  --container-runtime=docker: (57.20246678s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-351762 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 21:39:38.876682  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:39:49.118152  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-351762 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.302326116s)
helpers_test.go:175: Cleaning up "running-upgrade-351762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-351762
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-351762: (2.290705588s)
--- PASS: TestRunningBinaryUpgrade (88.53s)

TestKubernetesUpgrade (388.64s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.908086727s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-361605
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-361605: (11.120727797s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-361605 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-361605 status --format={{.Host}}: exit status 7 (75.674761ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m43.608623356s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-361605 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (149.295832ms)

-- stdout --
	* [kubernetes-upgrade-361605] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-361605
	    minikube start -p kubernetes-upgrade-361605 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3616052 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-361605 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-361605 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.823003441s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-361605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-361605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-361605: (2.836958025s)
--- PASS: TestKubernetesUpgrade (388.64s)
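
The refused downgrade above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) boils down to a version comparison between the running cluster (v1.34.1) and the requested version (v1.28.0). A minimal sketch of such a guard, assuming simple vMAJOR.MINOR.PATCH strings; this mirrors the behaviour the test exercises, not minikube's actual implementation:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.34.1" into its numeric components.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// isDowngrade reports whether target is older than current.
func isDowngrade(current, target string) bool {
	c, t := parse(current), parse(target)
	for i := 0; i < len(c) && i < len(t); i++ {
		if t[i] != c[i] {
			return t[i] < c[i]
		}
	}
	return len(t) < len(c)
}

func main() {
	if isDowngrade("v1.34.1", "v1.28.0") {
		fmt.Println("refusing unsafe downgrade: delete the cluster or keep the newer version")
	}
}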

TestMissingContainerUpgrade (106.36s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
E1002 21:40:09.599642  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.875101862 start -p missing-upgrade-548357 --memory=3072 --driver=docker  --container-runtime=docker
E1002 21:40:38.533556  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.875101862 start -p missing-upgrade-548357 --memory=3072 --driver=docker  --container-runtime=docker: (30.065521781s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-548357
E1002 21:40:50.562112  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:40:55.464197  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-548357: (10.490602771s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-548357
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-548357 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-548357 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.104680043s)
helpers_test.go:175: Cleaning up "missing-upgrade-548357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-548357
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-548357: (2.597208714s)
--- PASS: TestMissingContainerUpgrade (106.36s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318177 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-318177 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (106.914643ms)

-- stdout --
	* [NoKubernetes-318177] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-702037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-702037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
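
The MK_USAGE failure above (exit status 14) is pure flag validation: --kubernetes-version and --no-kubernetes are mutually exclusive. A hypothetical sketch of that kind of guard with Go's standard flag package; the message and exit code are copied from the log, while the validation itself is illustrative rather than minikube's code:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the contradictory combination before doing any real work.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status observed in the test above
	}
}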

TestNoKubernetes/serial/StartWithK8s (43.08s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.705192908s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318177 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.08s)

TestNoKubernetes/serial/StartWithStopK8s (19.81s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318177 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 21:35:55.464424  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318177 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (17.760707962s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318177 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-318177 status -o json: exit status 2 (332.206964ms)

-- stdout --
	{"Name":"NoKubernetes-318177","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-318177
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-318177: (1.72154007s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.81s)

TestNoKubernetes/serial/Start (10.23s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318177 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318177 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (10.229196769s)
--- PASS: TestNoKubernetes/serial/Start (10.23s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318177 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318177 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.383186ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (1.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.23s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-318177
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-318177: (1.232911687s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (8.29s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318177 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318177 --driver=docker  --container-runtime=docker: (8.28602963s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.29s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318177 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318177 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.665628ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStoppedBinaryUpgrade/Setup (7.61s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (7.61s)

TestStoppedBinaryUpgrade/Upgrade (74.44s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4128034027 start -p stopped-upgrade-515415 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1002 21:42:12.484556  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4128034027 start -p stopped-upgrade-515415 --memory=3072 --vm-driver=docker  --container-runtime=docker: (38.05365363s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4128034027 -p stopped-upgrade-515415 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4128034027 -p stopped-upgrade-515415 stop: (12.063425039s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-515415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-515415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.322545838s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.44s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-515415
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-515415: (1.227666836s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

TestPause/serial/Start (76.73s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-709436 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1002 21:43:49.230657  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:44:28.623649  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-709436 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m16.726414615s)
--- PASS: TestPause/serial/Start (76.73s)

TestPause/serial/SecondStartNoReconfiguration (50.12s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-709436 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 21:44:56.326065  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-709436 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.105438394s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.12s)

TestPause/serial/Pause (0.66s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-709436 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

TestPause/serial/VerifyStatus (0.31s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-709436 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-709436 --output=json --layout=cluster: exit status 2 (314.069466ms)

-- stdout --
	{"Name":"pause-709436","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-709436","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.59s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-709436 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

TestPause/serial/PauseAgain (1.1s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-709436 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-709436 --alsologtostderr -v=5: (1.097927911s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

TestPause/serial/DeletePaused (2.19s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-709436 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-709436 --alsologtostderr -v=5: (2.189891026s)
--- PASS: TestPause/serial/DeletePaused (2.19s)

TestPause/serial/VerifyDeletedResources (0.4s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-709436
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-709436: exit status 1 (21.079336ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-709436: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
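
The cleanup verification above leans on "docker volume inspect" exiting non-zero once the volume is gone (exit status 1, "no such volume"). A small Go sketch of the same probe via os/exec, assuming a docker CLI on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// volumeExists reports whether `docker volume inspect <name>` succeeds;
// a missing volume makes the command exit non-zero, as in the log above.
func volumeExists(name string) bool {
	return exec.Command("docker", "volume", "inspect", name).Run() == nil
}

func main() {
	fmt.Println(volumeExists("pause-709436")) // false once the profile is deleted
}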

TestNetworkPlugins/group/auto/Start (75.07s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1002 21:45:55.464355  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m15.072912057s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.07s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-174412 "pgrep -a kubelet"
I1002 21:46:48.771120  703895 config.go:182] Loaded profile config "auto-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (11.31s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gln2b" [656feaaa-bd3d-4510-adc0-0ee7da3b76d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gln2b" [656feaaa-bd3d-4510-adc0-0ee7da3b76d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.0051304s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

TestNetworkPlugins/group/auto/DNS (0.38s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.38s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
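
The Localhost and HairPin probes above reduce to a timed TCP connect ("nc -w 5 -i 5 -z <host> 8080" inside the netcat pod). An equivalent check written in Go, as a sketch using only the standard library; the host and port are the ones from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable dials host:port over TCP and reports success within the timeout,
// the same pass/fail signal "nc -z" gives the test.
func reachable(host, port string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(reachable("localhost", "8080", 5*time.Second))
}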

TestNetworkPlugins/group/kindnet/Start (64.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m4.181259741s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.18s)

TestNetworkPlugins/group/calico/Start (69.86s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.855348054s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.86s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-llq87" [172a7261-4199-40cb-b24a-7136a9a87e86] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004200881s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-174412 "pgrep -a kubelet"
I1002 21:48:36.754150  703895 config.go:182] Loaded profile config "kindnet-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-174412 replace --force -f testdata/netcat-deployment.yaml
I1002 21:48:37.094805  703895 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b7msz" [0389d61b-67a7-45af-bca1-a655e88b2966] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b7msz" [0389d61b-67a7-45af-bca1-a655e88b2966] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003472085s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (57.88s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (57.878831338s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.88s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-k9kvq" [b1fcb570-7d11-48ec-9e15-71b3ac820c47] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1002 21:49:28.624115  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004580058s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-174412 "pgrep -a kubelet"
I1002 21:49:31.793645  703895 config.go:182] Loaded profile config "calico-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9m6np" [804988e5-2f80-438a-bd09-cde8e43c7d87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9m6np" [804988e5-2f80-438a-bd09-cde8e43c7d87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004927345s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

TestNetworkPlugins/group/calico/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (80.46s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m20.456871922s)
--- PASS: TestNetworkPlugins/group/false/Start (80.46s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-174412 "pgrep -a kubelet"
I1002 21:50:13.150987  703895 config.go:182] Loaded profile config "custom-flannel-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g79xh" [86db8044-8da6-41ef-9b84-24a5bdab3ade] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g79xh" [86db8044-8da6-41ef-9b84-24a5bdab3ade] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004212671s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (75.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1002 21:50:55.464056  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m15.274273744s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.27s)

TestNetworkPlugins/group/false/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-174412 "pgrep -a kubelet"
I1002 21:51:31.804661  703895 config.go:182] Loaded profile config "false-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.45s)

TestNetworkPlugins/group/false/NetCatPod (10.43s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5n6n5" [637f955e-9b21-4a03-bffa-4603f49e6b24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5n6n5" [637f955e-9b21-4a03-bffa-4603f49e6b24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004978687s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.43s)

TestNetworkPlugins/group/false/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (58.12s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (58.11969536s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-174412 "pgrep -a kubelet"
I1002 21:52:08.751672  703895 config.go:182] Loaded profile config "enable-default-cni-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2ppw2" [37ab8829-eb09-4cde-8ca1-5e8a7a03baaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:52:09.537952  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/auto-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2ppw2" [37ab8829-eb09-4cde-8ca1-5e8a7a03baaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004116121s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m14.390489052s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.39s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-s9sgb" [f5b8b00e-62e5-4b5d-809b-d5dea30c6531] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00613722s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-174412 "pgrep -a kubelet"
I1002 21:53:08.105556  703895 config.go:182] Loaded profile config "flannel-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zbmdw" [73c67d11-a009-4b96-bad3-7a10dafc94c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:53:10.994616  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/auto-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zbmdw" [73c67d11-a009-4b96-bad3-7a10dafc94c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003887079s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1002 21:53:49.231163  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:53:50.837772  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kindnet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-174412 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m21.181680643s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (81.18s)
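Note: unlike the CNI runs above, kubenet is selected with --network-plugin=kubenet rather than --cni=<name>, because kubenet is the legacy kubelet-managed bridge plugin rather than a CNI add-on. The two flag shapes, both taken from this run (wait and logging flags omitted here), are:

  out/minikube-linux-arm64 start -p flannel-174412 --memory=3072 --cni=flannel --driver=docker --container-runtime=docker
  out/minikube-linux-arm64 start -p kubenet-174412 --memory=3072 --network-plugin=kubenet --driver=docker --container-runtime=docker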
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-174412 "pgrep -a kubelet"
I1002 21:54:00.552267  703895 config.go:182] Loaded profile config "bridge-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v5rgv" [7a30ac59-1bef-487d-ae16-61f1551068dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v5rgv" [7a30ac59-1bef-487d-ae16-61f1551068dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004301002s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-174412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1002 21:54:11.319531  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kindnet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-199211 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1002 21:54:45.890353  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/calico-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:54:52.280918  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kindnet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-199211 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m31.925791565s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (91.93s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-174412 "pgrep -a kubelet"
I1002 21:55:04.625834  703895 config.go:182] Loaded profile config "kubenet-174412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-174412 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bbmmn" [887b8682-0e48-4e65-b973-d16837d81be3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:55:06.371946  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/calico-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bbmmn" [887b8682-0e48-4e65-b973-d16837d81be3] Running
E1002 21:55:13.480905  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:13.487387  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:13.498854  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:13.520284  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:13.561713  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:13.643214  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:13.804942  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:14.126652  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:14.768917  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004583753s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.41s)
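Note: the repeated cert_rotation.go:172 errors interleaved above appear to come from the shared test binary's client-go transport cache, which keeps trying to reload client certificates for profiles (custom-flannel-174412, calico-174412, and others) that earlier tests already deleted; they are noise in this run, not failures of the kubenet test. One way to confirm those profile directories are gone, using the path from the error text, would be:

  ls /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/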
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-174412 exec deployment/netcat -- nslookup kubernetes.default
E1002 21:55:16.050863  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-174412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-096259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:55:47.334274  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/calico-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:51.687901  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:54.460010  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:55:55.463807  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-096259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m26.792317331s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.79s)
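Note: --preload=false tells minikube to skip the preloaded image tarball for v1.34.1 and pull every component image individually, which is the point of the no-preload group and likely part of why this FirstStart runs ~87s. Abbreviated from the invocation above:

  out/minikube-linux-arm64 start -p no-preload-096259 --memory=3072 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.34.1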
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-199211 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d5c478a6-857c-4070-bc47-c03c9907b057] Pending
helpers_test.go:352: "busybox" [d5c478a6-857c-4070-bc47-c03c9907b057] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d5c478a6-857c-4070-bc47-c03c9907b057] Running
E1002 21:56:14.202287  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kindnet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004157096s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-199211 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-199211 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-199211 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.234995809s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-199211 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-199211 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-199211 --alsologtostderr -v=3: (11.325650097s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-199211 -n old-k8s-version-199211
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-199211 -n old-k8s-version-199211: exit status 7 (71.753585ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-199211 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
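Note: --format={{.Host}} is a Go template over minikube's status struct, and status intentionally exits non-zero while components are down (exit status 7 here for a stopped host), which is why the harness logs the code as "may be ok" instead of failing. The other fields used in this report can be queried the same way:

  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-199211
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-199211
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-199211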
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-199211 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1002 21:56:32.176374  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.185375  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.198830  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.220508  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.261980  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.343324  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.504772  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:32.826408  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.468047  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:34.749487  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:35.421361  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:37.311732  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:42.434041  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:49.039015  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/auto-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:52.675334  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-199211 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (28.908461838s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-199211 -n old-k8s-version-199211
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (29.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pfz2j" [3da2c259-a8f8-424b-a941-f235a90c392c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pfz2j" [3da2c259-a8f8-424b-a941-f235a90c392c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.014932158s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.02s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-096259 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b4614941-81e9-4ca5-8fd0-3cdfd5ec32d7] Pending
helpers_test.go:352: "busybox" [b4614941-81e9-4ca5-8fd0-3cdfd5ec32d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 21:57:09.092177  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.098591  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.110044  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.131572  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.173074  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.254899  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.256006  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/calico-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.416525  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:09.738231  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b4614941-81e9-4ca5-8fd0-3cdfd5ec32d7] Running
E1002 21:57:10.379529  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:11.661827  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004671288s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-096259 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pfz2j" [3da2c259-a8f8-424b-a941-f235a90c392c] Running
E1002 21:57:13.156669  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:14.223509  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004417094s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-199211 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-096259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 21:57:16.758383  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/auto-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-096259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.029573699s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-096259 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-096259 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-096259 --alsologtostderr -v=3: (12.054949205s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-199211 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)
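Note: VerifyKubernetesImages lists every image loaded in the node and reports anything outside the expected Kubernetes set; gcr.io/k8s-minikube/busybox shows up here because the earlier DeployApp step pulled it. The same listing can be taken in a human-readable form (assuming the table formatter available in recent minikube builds):

  out/minikube-linux-arm64 -p old-k8s-version-199211 image list --format=table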
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-199211 --alsologtostderr -v=1
E1002 21:57:18.535588  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-199211 -n old-k8s-version-199211
E1002 21:57:19.345820  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-199211 -n old-k8s-version-199211: exit status 2 (321.188736ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-199211 -n old-k8s-version-199211
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-199211 -n old-k8s-version-199211: exit status 2 (322.360033ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-199211 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-199211 -n old-k8s-version-199211
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-199211 -n old-k8s-version-199211
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.19s)
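Note: the Pause subtest's expected shape is visible in the statuses above: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (both with exit status 2), and after unpause the same status commands return cleanly. Condensed:

  out/minikube-linux-arm64 pause -p old-k8s-version-199211 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-199211    # Paused, exit 2
  out/minikube-linux-arm64 unpause -p old-k8s-version-199211 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-199211    # healthy again, exit 0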
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-911307 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:57:29.587528  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-911307 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m22.513594056s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.51s)
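Note: --embed-certs inlines the client certificate and key into kubeconfig as base64 data instead of referencing files under .minikube/profiles, which sidesteps the kind of stale file references seen in the cert_rotation errors throughout this log. One way to spot-check the embedded data, assuming the profile name is also the kubeconfig user name, would be:

  kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-911307")].user.client-certificate-data}' | head -c 24; echo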
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-096259 -n no-preload-096259
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-096259 -n no-preload-096259: exit status 7 (96.846915ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-096259 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-096259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:57:50.069638  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:54.118633  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:57.342647  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:01.694757  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:01.701113  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:01.712436  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:01.733639  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:01.774995  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:01.856410  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:02.017747  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:02.339728  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:02.981229  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:04.263319  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:06.824798  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:11.946698  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:22.188065  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-096259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m0.177654709s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-096259 -n no-preload-096259
E1002 21:58:30.342779  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kindnet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.58s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s2rnv" [3c1883ea-1a4f-4fb1-8c20-93fb5148f2a3] Running
E1002 21:58:31.031466  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004109521s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s2rnv" [3c1883ea-1a4f-4fb1-8c20-93fb5148f2a3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00349484s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-096259 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-096259 image list --format=json
E1002 21:58:42.669605  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-096259 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-096259 -n no-preload-096259
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-096259 -n no-preload-096259: exit status 2 (338.370752ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-096259 -n no-preload-096259
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-096259 -n no-preload-096259: exit status 2 (328.833422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-096259 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-096259 -n no-preload-096259
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-096259 -n no-preload-096259
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-911307 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8fe18870-8015-4871-aa0c-114183fc7831] Pending
helpers_test.go:352: "busybox" [8fe18870-8015-4871-aa0c-114183fc7831] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8fe18870-8015-4871-aa0c-114183fc7831] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003724236s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-911307 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)
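
Note: the deploy-and-wait step above can be approximated outside the harness with kubectl wait; a sketch assuming the busybox manifest labels its pod integration-test=busybox in the default namespace, as the wait filter above indicates:

  $ kubectl --context embed-certs-911307 create -f testdata/busybox.yaml
  $ kubectl --context embed-certs-911307 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m    # same 8m budget the test allows
  $ kubectl --context embed-certs-911307 exec busybox -- /bin/sh -c "ulimit -n"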

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-544473 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:58:49.233593  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/functional-535239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-544473 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m16.079738015s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.08s)
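
Note: this profile starts the API server on a non-default port (--apiserver-port=8444). One way to confirm the port took effect is to read the server URL back from the kubeconfig; the jsonpath expression below is illustrative:

  $ kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-544473")].cluster.server}'
  # expected to end in :8444 rather than the default :8443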

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-911307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-911307 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (11.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-911307 --alsologtostderr -v=3
E1002 21:58:58.044133  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kindnet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:00.868650  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:00.875032  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:00.886507  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:00.908027  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:00.950047  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:01.031646  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:01.193796  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:01.515948  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:02.158084  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:03.439563  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:06.002139  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-911307 --alsologtostderr -v=3: (11.12265902s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.12s)
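
Note: the repeated cert_rotation errors interleaved above appear to be background noise from kubeconfig entries that still reference profiles deleted earlier in the run (bridge-174412 and similar), not failures of this test. A hedged cleanup sketch for such stale entries:

  $ out/minikube-linux-arm64 delete -p bridge-174412      # no-op if the profile is already gone
  $ kubectl config delete-context bridge-174412           # drop the dangling kubeconfig context
  $ kubectl config delete-cluster bridge-174412
  $ kubectl config delete-user bridge-174412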

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-911307 -n embed-certs-911307
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-911307 -n embed-certs-911307: exit status 7 (109.84279ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-911307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)
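
Note: the harness distinguishes exit codes here: a paused cluster yields status 2, a stopped host yields status 7, and both are tolerated as "may be ok". A one-liner to observe the code directly:

  $ out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-911307; echo "exit=$?"
  # prints "Stopped" then "exit=7" while the profile is down, per the run above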

TestStartStop/group/embed-certs/serial/SecondStart (56.78s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-911307 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:59:11.124171  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:16.039887  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:21.366247  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:23.631395  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:25.389734  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/calico-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:28.624643  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/skaffold-208583/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:41.847995  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:52.952916  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/enable-default-cni-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:59:53.097681  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/calico-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-911307 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (56.244340303s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-911307 -n embed-certs-911307
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.78s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-544473 create -f testdata/busybox.yaml
E1002 22:00:05.001552  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:05.008085  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:05.020478  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:05.041878  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:05.083321  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:05.165476  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2d6ca747-3e88-4247-b102-edf72b78ffdc] Pending
E1002 22:00:05.327260  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [2d6ca747-3e88-4247-b102-edf72b78ffdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 22:00:06.291503  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:07.573541  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [2d6ca747-3e88-4247-b102-edf72b78ffdc] Running
E1002 22:00:10.135175  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003959911s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-544473 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7thvh" [37e0385d-1d46-4525-8e98-30c094ceef45] Running
E1002 22:00:05.649288  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003580996s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7thvh" [37e0385d-1d46-4525-8e98-30c094ceef45] Running
E1002 22:00:13.481031  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003273937s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-911307 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-544473 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 22:00:15.257030  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-544473 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-544473 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-544473 --alsologtostderr -v=3: (12.132596073s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-911307 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-911307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-911307 -n embed-certs-911307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-911307 -n embed-certs-911307: exit status 2 (316.115669ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-911307 -n embed-certs-911307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-911307 -n embed-certs-911307: exit status 2 (328.529336ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-911307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-911307 -n embed-certs-911307
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-911307 -n embed-certs-911307
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/FirstStart (49.25s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-342599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 22:00:22.809470  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:25.498436  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-342599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (49.247635143s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.25s)
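
Note: this profile starts with --wait=apiserver,system_pods,default_sa and installs no CNI, so workload pods cannot schedule yet (hence the "cni mode requires additional setup" skips further down). One possible follow-up, assuming flannel and rewriting its default pod network to match the kubeadm.pod-network-cidr above; the URL and sed edit are illustrative, not part of this test:

  $ curl -LO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  $ sed -i 's#10.244.0.0/16#10.42.0.0/16#' kube-flannel.yml   # align with pod-network-cidr=10.42.0.0/16
  $ kubectl --context newest-cni-342599 apply -f kube-flannel.yml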

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473: exit status 7 (96.357796ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-544473 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-544473 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 22:00:41.184283  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/custom-flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:45.553466  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/flannel-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:45.980439  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:00:55.464264  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/addons-991638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.219271  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.225622  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.237007  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.258460  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.299895  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.382240  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.544524  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:08.866242  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:09.507488  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:10.789492  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-544473 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m0.531426072s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473
E1002 22:01:28.714774  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-342599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-342599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.131835667s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (11.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-342599 --alsologtostderr -v=3
E1002 22:01:13.351612  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:18.472896  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-342599 --alsologtostderr -v=3: (11.095910306s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-342599 -n newest-cni-342599
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-342599 -n newest-cni-342599: exit status 7 (82.376764ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-342599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (22.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-342599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 22:01:26.942379  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/kubenet-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-342599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (21.6238507s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-342599 -n newest-cni-342599
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.39s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jt9s4" [617621c0-d7cd-405e-a305-86639cd719df] Running
E1002 22:01:32.175961  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/false-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004235231s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jt9s4" [617621c0-d7cd-405e-a305-86639cd719df] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0046862s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-544473 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-544473 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-544473 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-544473 --alsologtostderr -v=1: (1.183287592s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473: exit status 2 (527.988382ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473: exit status 2 (490.910379ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-544473 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-544473 --alsologtostderr -v=1: (1.016607835s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473
E1002 22:01:44.733679  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/bridge-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-544473 -n default-k8s-diff-port-544473
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.89s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-342599 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (4.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-342599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-342599 --alsologtostderr -v=1: (1.37347257s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-342599 -n newest-cni-342599
E1002 22:01:49.038869  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/auto-174412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-342599 -n newest-cni-342599: exit status 2 (813.490626ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-342599 -n newest-cni-342599
E1002 22:01:49.196661  703895 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-702037/.minikube/profiles/old-k8s-version-199211/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-342599 -n newest-cni-342599: exit status 2 (740.539788ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-342599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-342599 -n newest-cni-342599
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-342599 -n newest-cni-342599
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.77s)

Test skip (26/346)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-039409 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-039409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-039409
--- SKIP: TestDownloadOnlyKic (0.43s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-174412 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-174412

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-174412

>>> host: /etc/nsswitch.conf:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/hosts:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/resolv.conf:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-174412

>>> host: crictl pods:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: crictl containers:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> k8s: describe netcat deployment:
error: context "cilium-174412" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-174412" does not exist

>>> k8s: netcat logs:
error: context "cilium-174412" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-174412" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-174412" does not exist

>>> k8s: coredns logs:
error: context "cilium-174412" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-174412" does not exist

>>> k8s: api server logs:
error: context "cilium-174412" does not exist

>>> host: /etc/cni:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: ip a s:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: ip r s:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: iptables-save:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: iptables table nat:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-174412

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-174412

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-174412" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-174412" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-174412

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-174412

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-174412" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-174412" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-174412" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-174412" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-174412" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: kubelet daemon config:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> k8s: kubelet logs:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-174412

>>> host: docker daemon status:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: docker daemon config:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: docker system info:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: cri-docker daemon status:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: cri-docker daemon config:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: cri-dockerd version:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: containerd daemon status:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: containerd daemon config:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: containerd config dump:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: crio daemon status:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: crio daemon config:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: /etc/crio:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"

>>> host: crio config:
* Profile "cilium-174412" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174412"
----------------------- debugLogs end: cilium-174412 [took: 5.211979555s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-174412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-174412
--- SKIP: TestNetworkPlugins/group/cilium (5.39s)
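
Every command in the debugLogs block above fails the same way: the kubeconfig dumped under "k8s: kubectl config" has clusters: null and contexts: null, so no cilium-174412 context exists for kubectl or minikube to target; the profile was skipped before a cluster was ever started. A pre-flight check along these lines (sketched as a plain os/exec wrapper, not minikube's actual collector code) would let the collector short-circuit instead of issuing dozens of doomed calls:

package debuglogs

import (
	"os/exec"
	"strings"
)

// contextExists reports whether the current kubeconfig defines the named
// context, using `kubectl config get-contexts -o name`.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

With the empty kubeconfig shown above, the command prints nothing and contextExists returns false immediately.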

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-218804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-218804
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
