Test Report: Docker_Linux_docker_arm64 21647

f5f0858587e77e8c1559a01ec4b2a40a06b76dc9:2025-10-18:41961

Failed tests (5/347)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 211.85
37 TestAddons/parallel/Ingress 492.87
41 TestAddons/parallel/CSI 381.62
44 TestAddons/parallel/LocalPath 345.93
91 TestFunctional/parallel/DashboardCmd 302.25
TestAddons/serial/Volcano (211.85s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 99.577661ms
addons_test.go:884: volcano-controller stabilized in 99.685084ms
addons_test.go:868: volcano-scheduler stabilized in 99.78289ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-r9ldz" [235c9bfe-c8c2-4846-a8d9-bd3061aae1b9] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00366941s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-27fgr" [1650ee47-1138-4ff9-bcaa-e69c7229ccb0] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004055641s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-bqlv2" [658d367d-777e-421a-bbac-743298312d06] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004096781s
addons_test.go:903: (dbg) Run:  kubectl --context addons-481299 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-481299 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-481299 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [211aca47-df9e-41d1-a403-e77c9d85646f] Pending
helpers_test.go:352: "test-job-nginx-0" [211aca47-df9e-41d1-a403-e77c9d85646f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:935: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
addons_test.go:935: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-10-18 12:06:09.986283799 +0000 UTC m=+398.109006399
addons_test.go:935: (dbg) Run:  kubectl --context addons-481299 describe po test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) kubectl --context addons-481299 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             addons-481299/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:03:10 +0000
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-1c2079c0-0a32-4159-8365-27ae59d0db07
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               10.244.0.27
IPs:
IP:           10.244.0.27
Controlled By:  Job/test-job
Containers:
nginx:
Container ID:  
Image:         nginx:latest
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
10m
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pvzzm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-pvzzm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From     Message
----     ------     ----                  ----     -------
Normal   Scheduled  3m                    volcano  Successfully assigned my-volcano/test-job-nginx-0 to addons-481299
Warning  Failed     88s (x2 over 2m44s)   kubelet  Failed to pull image "nginx:latest": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    15s (x11 over 2m58s)  kubelet  Back-off pulling image "nginx:latest"
Warning  Failed     15s (x11 over 2m58s)  kubelet  Error: ImagePullBackOff
Normal   Pulling    3s (x5 over 2m59s)    kubelet  Pulling image "nginx:latest"
Warning  Failed     3s (x3 over 2m59s)    kubelet  Failed to pull image "nginx:latest": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3s (x5 over 2m59s)    kubelet  Error: ErrImagePull
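The `toomanyrequests` failures above come from Docker Hub's unauthenticated pull rate limit. Outside the test run, the current quota can be read from the `ratelimit-limit` / `ratelimit-remaining` headers Docker Hub returns on manifest requests, which use a documented `N;w=<window-seconds>` form. A small parsing sketch (the header values below are illustrative, not taken from this run):

```python
def parse_rate_limit(header: str) -> tuple[int, int]:
    """Parse a Docker Hub rate-limit header value like '100;w=21600'
    into (count, window_seconds). A missing window yields 0."""
    count_part, _, window_part = header.partition(";")
    window = int(window_part.split("=", 1)[1]) if window_part else 0
    return int(count_part), window

# Illustrative values in Docker Hub's documented format:
limit = parse_rate_limit("100;w=21600")    # (100, 21600): 100 pulls per 6 h
remaining = parse_rate_limit("0;w=21600")  # remaining quota 0 -> pulls fail
print(limit, remaining)
```

When `ratelimit-remaining` reaches `0;w=…`, further anonymous pulls return exactly the `toomanyrequests` error seen in the kubelet events above; authenticating (`docker login`) or pre-loading the image into the profile avoids the anonymous limit.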
addons_test.go:935: (dbg) Run:  kubectl --context addons-481299 logs test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) Non-zero exit: kubectl --context addons-481299 logs test-job-nginx-0 -n my-volcano: exit status 1 (111.392877ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "test-job-nginx-0" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:935: kubectl --context addons-481299 logs test-job-nginx-0 -n my-volcano: exit status 1
addons_test.go:936: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
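For context, a Volcano Job matching the pod described above (namespace `my-volcano`, queue `test`, one `nginx` task running `sleep 10m`) would look roughly as follows. The actual contents of `testdata/vcjob.yaml` are not shown in this report, so this is a reconstruction from the pod's labels and spec; fields not visible in the log (e.g. `minAvailable`) are guesses:

```yaml
# Hypothetical reconstruction of testdata/vcjob.yaml from the describe output
# above; values not present in the log are assumptions.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  queue: test
  minAvailable: 1        # not visible in the log; assumed
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          containers:
            - name: nginx
              image: nginx:latest
              command: ["sleep", "10m"]
          restartPolicy: Never
```

The test itself never gets past `ImagePullBackOff`, so the job spec is not at fault here; the pull rate limit is.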
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-481299
helpers_test.go:243: (dbg) docker inspect addons-481299:

-- stdout --
	[
	    {
	        "Id": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	        "Created": "2025-10-18T12:00:12.386585691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:00:12.450161174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hosts",
	        "LogPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8-json.log",
	        "Name": "/addons-481299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-481299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-481299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	                "LowerDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc-init/diff:/var/lib/docker/overlay2/6da29ec63a7e3e82b22991cef36ed721f46b0e2f648634b12d0ad6b7225284d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-481299",
	                "Source": "/var/lib/docker/volumes/addons-481299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-481299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-481299",
	                "name.minikube.sigs.k8s.io": "addons-481299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f570c3a0d59e39fd3f8ba76488f6d4555992d509e8801ed716c6c97159646bf",
	            "SandboxKey": "/var/run/docker/netns/9f570c3a0d59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-481299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:8f:15:9b:c7:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bb7b19c2893329488850783d25e49e7f60d5e1cf75347f698d754ce5d4cf2448",
	                    "EndpointID": "51c67f4b258ce43a4b9996e94609bda24fc6b23ad970edc2979b013dbc0a113d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-481299",
	                        "7812e8ac5e4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-481299 -n addons-481299
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 logs -n 25: (1.3724003s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ -o=json --download-only -p download-only-223621 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p download-docker-419204 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p download-docker-419204                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p binary-mirror-415208 --alsologtostderr --binary-mirror http://127.0.0.1:32775 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p binary-mirror-415208                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ addons  │ enable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ addons  │ disable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ start   │ -p addons-481299 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 12:02 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:59:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:59:44.391839  838384 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:44.391981  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.391993  838384 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:44.391997  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.392266  838384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 11:59:44.392787  838384 out.go:368] Setting JSON to false
	I1018 11:59:44.393655  838384 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13335,"bootTime":1760775450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 11:59:44.393734  838384 start.go:141] virtualization:  
	I1018 11:59:44.397036  838384 out.go:179] * [addons-481299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 11:59:44.400845  838384 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:59:44.400912  838384 notify.go:220] Checking for updates...
	I1018 11:59:44.406563  838384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:59:44.409444  838384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 11:59:44.412417  838384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 11:59:44.415444  838384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 11:59:44.418427  838384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:59:44.421615  838384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:59:44.443038  838384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 11:59:44.443164  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.504142  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.494831106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.504254  838384 docker.go:318] overlay module found
	I1018 11:59:44.507452  838384 out.go:179] * Using the docker driver based on user configuration
	I1018 11:59:44.510245  838384 start.go:305] selected driver: docker
	I1018 11:59:44.510268  838384 start.go:925] validating driver "docker" against <nil>
	I1018 11:59:44.510293  838384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:59:44.511015  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.564605  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.555845429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.564766  838384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:59:44.564999  838384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:59:44.567965  838384 out.go:179] * Using Docker driver with root privileges
	I1018 11:59:44.570848  838384 cni.go:84] Creating CNI manager for ""
	I1018 11:59:44.570930  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 11:59:44.570945  838384 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:59:44.571024  838384 start.go:349] cluster config:
	{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:59:44.574127  838384 out.go:179] * Starting "addons-481299" primary control-plane node in "addons-481299" cluster
	I1018 11:59:44.576970  838384 cache.go:123] Beginning downloading kic base image for docker with docker
	I1018 11:59:44.579989  838384 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:59:44.582852  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 11:59:44.582920  838384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1018 11:59:44.582938  838384 cache.go:58] Caching tarball of preloaded images
	I1018 11:59:44.582927  838384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:59:44.583037  838384 preload.go:233] Found /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 11:59:44.583047  838384 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1018 11:59:44.583380  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 11:59:44.583404  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json: {Name:mkb94853570c970a5649e27444200bcb46f3c9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:59:44.598730  838384 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:59:44.598847  838384 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:59:44.598874  838384 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 11:59:44.598880  838384 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 11:59:44.598888  838384 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 11:59:44.598893  838384 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:00:04.269349  838384 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:00:04.269387  838384 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:00:04.269433  838384 start.go:360] acquireMachinesLock for addons-481299: {Name:mkd3a8e0953bb87092640ae884afd157e18dcff7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:00:04.270184  838384 start.go:364] duration metric: took 728.139µs to acquireMachinesLock for "addons-481299"
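	[editor's note] The acquireMachinesLock lines above show minikube polling for a machine lock with the settings {Delay:500ms Timeout:10m0s}. A minimal sketch of that retry-until-timeout pattern (hypothetical helper, not minikube's actual code):

```python
import time

# Sketch: retry a non-blocking acquire function every `delay` seconds until it
# succeeds or `timeout` seconds have elapsed, mirroring Delay:500ms Timeout:10m0s.
def acquire_with_retry(try_acquire, delay=0.5, timeout=600.0, clock=time.monotonic):
    deadline = clock() + timeout
    while True:
        if try_acquire():
            return True           # lock obtained
        if clock() >= deadline:
            return False          # gave up after the timeout
        time.sleep(delay)         # back off before the next attempt
```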
	I1018 12:00:04.270232  838384 start.go:93] Provisioning new machine with config: &{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:04.270339  838384 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:00:04.273733  838384 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:00:04.273967  838384 start.go:159] libmachine.API.Create for "addons-481299" (driver="docker")
	I1018 12:00:04.274006  838384 client.go:168] LocalClient.Create starting
	I1018 12:00:04.274125  838384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem
	I1018 12:00:04.457164  838384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem
	I1018 12:00:06.171545  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:00:06.188036  838384 cli_runner.go:211] docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:00:06.188147  838384 network_create.go:284] running [docker network inspect addons-481299] to gather additional debugging logs...
	I1018 12:00:06.188169  838384 cli_runner.go:164] Run: docker network inspect addons-481299
	W1018 12:00:06.204439  838384 cli_runner.go:211] docker network inspect addons-481299 returned with exit code 1
	I1018 12:00:06.204474  838384 network_create.go:287] error running [docker network inspect addons-481299]: docker network inspect addons-481299: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-481299 not found
	I1018 12:00:06.204503  838384 network_create.go:289] output of [docker network inspect addons-481299]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-481299 not found
	
	** /stderr **
	I1018 12:00:06.204601  838384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:06.221696  838384 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d74f90}
	I1018 12:00:06.221747  838384 network_create.go:124] attempt to create docker network addons-481299 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:00:06.221806  838384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-481299 addons-481299
	I1018 12:00:06.276048  838384 network_create.go:108] docker network addons-481299 192.168.49.0/24 created
	I1018 12:00:06.276085  838384 kic.go:121] calculated static IP "192.168.49.2" for the "addons-481299" container
	I1018 12:00:06.276177  838384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:00:06.289979  838384 cli_runner.go:164] Run: docker volume create addons-481299 --label name.minikube.sigs.k8s.io=addons-481299 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:00:06.312048  838384 oci.go:103] Successfully created a docker volume addons-481299
	I1018 12:00:06.312139  838384 cli_runner.go:164] Run: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:00:08.424976  838384 cli_runner.go:217] Completed: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.112781921s)
	I1018 12:00:08.425013  838384 oci.go:107] Successfully prepared a docker volume addons-481299
	I1018 12:00:08.425034  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:08.425054  838384 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:00:08.425128  838384 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:00:12.313130  838384 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.887944574s)
	I1018 12:00:12.313163  838384 kic.go:203] duration metric: took 3.888105963s to extract preloaded images to volume ...
	W1018 12:00:12.313309  838384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:00:12.313422  838384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:00:12.370823  838384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-481299 --name addons-481299 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-481299 --network addons-481299 --ip 192.168.49.2 --volume addons-481299:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:00:12.674830  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Running}}
	I1018 12:00:12.693179  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:12.716439  838384 cli_runner.go:164] Run: docker exec addons-481299 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:00:12.770464  838384 oci.go:144] the created container "addons-481299" has a running status.
	I1018 12:00:12.770498  838384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa...
	I1018 12:00:13.113159  838384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:00:13.151031  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.187260  838384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:00:13.187279  838384 kic_runner.go:114] Args: [docker exec --privileged addons-481299 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:00:13.265766  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.292626  838384 machine.go:93] provisionDockerMachine start ...
	I1018 12:00:13.292734  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:13.317367  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:13.317685  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:13.317701  838384 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:00:13.319021  838384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:00:16.468726  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.468765  838384 ubuntu.go:182] provisioning hostname "addons-481299"
	I1018 12:00:16.468854  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.485891  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.486196  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.486212  838384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-481299 && echo "addons-481299" | sudo tee /etc/hostname
	I1018 12:00:16.646538  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.646612  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.664006  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.664309  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.664343  838384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-481299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-481299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-481299' | sudo tee -a /etc/hosts; 
				fi
			fi
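	[editor's note] The SSH snippet above keeps /etc/hosts consistent with the provisioned hostname: if no line already ends with the hostname, it rewrites the existing 127.0.1.1 entry or appends one. The same logic as a self-contained sketch (hypothetical helper; the real work is done by the grep/sed pipeline shown):

```python
import re

# Sketch of the /etc/hosts edit run over SSH above: keep any existing entry for
# the hostname, otherwise rewrite the 127.0.1.1 line (or append one).
def ensure_host_entry(hosts_text, hostname):
    if re.search(r'^.*\s' + re.escape(hostname) + r'$', hosts_text, re.M):
        return hosts_text  # an entry for this hostname already exists
    if re.search(r'^127\.0\.1\.1\s', hosts_text, re.M):
        return re.sub(r'^127\.0\.1\.1\s.*$', '127.0.1.1 ' + hostname,
                      hosts_text, count=1, flags=re.M)
    return hosts_text + "127.0.1.1 " + hostname + "\n"
```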
	I1018 12:00:16.809369  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:00:16.809393  838384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-835760/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-835760/.minikube}
	I1018 12:00:16.809411  838384 ubuntu.go:190] setting up certificates
	I1018 12:00:16.809435  838384 provision.go:84] configureAuth start
	I1018 12:00:16.809492  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:16.826275  838384 provision.go:143] copyHostCerts
	I1018 12:00:16.826387  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/ca.pem (1078 bytes)
	I1018 12:00:16.826516  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/cert.pem (1123 bytes)
	I1018 12:00:16.826581  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/key.pem (1679 bytes)
	I1018 12:00:16.826641  838384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem org=jenkins.addons-481299 san=[127.0.0.1 192.168.49.2 addons-481299 localhost minikube]
	I1018 12:00:17.507919  838384 provision.go:177] copyRemoteCerts
	I1018 12:00:17.507999  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:00:17.508042  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.525367  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:17.628972  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:00:17.646764  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:00:17.664067  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:00:17.681849  838384 provision.go:87] duration metric: took 872.389064ms to configureAuth
	I1018 12:00:17.681878  838384 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:00:17.682093  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:17.682153  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.699224  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.699534  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.699549  838384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1018 12:00:17.845632  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1018 12:00:17.845657  838384 ubuntu.go:71] root file system type: overlay
	I1018 12:00:17.845771  838384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1018 12:00:17.845845  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.863079  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.863381  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.863474  838384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1018 12:00:18.022880  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1018 12:00:18.022967  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.040408  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:18.040739  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:18.040762  838384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1018 12:00:18.977721  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:17:10.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-18 12:00:18.020798373 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1018 12:00:18.977763  838384 machine.go:96] duration metric: took 5.685111923s to provisionDockerMachine
	I1018 12:00:18.977774  838384 client.go:171] duration metric: took 14.703758303s to LocalClient.Create
	I1018 12:00:18.977788  838384 start.go:167] duration metric: took 14.703822977s to libmachine.API.Create "addons-481299"
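The `sudo diff -u … || { sudo mv …; systemctl … }` command above is a change-detection idiom: `diff` exits non-zero only when the files differ, so the `||` branch installs the new unit and restarts Docker only on a real change. A minimal sketch of the same pattern on hypothetical temp files (no systemd involved):

```shell
# Write the candidate config to a .new file, then swap it in only if it
# differs from the live one. Paths here are throwaway temp files, not the
# real /lib/systemd/system/docker.service.
old=$(mktemp); new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$old"
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H fd://\n' > "$new"

replaced=no
diff -u "$old" "$new" >/dev/null || {
  mv "$new" "$old"   # swap in the changed config
  replaced=yes       # the real command also runs daemon-reload + restart
}
execstart_lines=$(grep -c '^ExecStart=' "$old")
echo "replaced=$replaced execstart_lines=$execstart_lines"
rm -f "$old" "$new"
```

Note the two `ExecStart=` lines that survive the swap: the empty one clears the inherited command, exactly as the drop-in comment above explains.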
	I1018 12:00:18.977798  838384 start.go:293] postStartSetup for "addons-481299" (driver="docker")
	I1018 12:00:18.977811  838384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:00:18.977881  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:00:18.977924  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.994172  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.097162  838384 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:00:19.100393  838384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:00:19.100424  838384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:00:19.100436  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/addons for local assets ...
	I1018 12:00:19.100511  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/files for local assets ...
	I1018 12:00:19.100544  838384 start.go:296] duration metric: took 122.736875ms for postStartSetup
	I1018 12:00:19.100868  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.116943  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 12:00:19.117256  838384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:00:19.117306  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.133849  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.234008  838384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:00:19.238363  838384 start.go:128] duration metric: took 14.968007424s to createHost
	I1018 12:00:19.238389  838384 start.go:83] releasing machines lock for "addons-481299", held for 14.968185396s
	I1018 12:00:19.238463  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.254388  838384 ssh_runner.go:195] Run: cat /version.json
	I1018 12:00:19.254441  838384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:00:19.254505  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.254443  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.276799  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.279431  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.472177  838384 ssh_runner.go:195] Run: systemctl --version
	I1018 12:00:19.478376  838384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:00:19.482387  838384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:00:19.482484  838384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:00:19.508985  838384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:00:19.509011  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.509045  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.509137  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.523297  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:00:19.531952  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:00:19.540691  838384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:00:19.540763  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:00:19.549594  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.558008  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:00:19.566758  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.576635  838384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:00:19.584880  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:00:19.593685  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:00:19.602453  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
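The run of `sed -i -r` commands above rewrites individual keys in `/etc/containerd/config.toml` while preserving their indentation (the `( *)` capture). The same anchored-substitution pattern on a throwaway copy, trimmed to two of the keys the log edits (file contents hypothetical):

```shell
# Apply the log's sandbox_image and SystemdCgroup rewrites to a temp TOML.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Same substitutions as above: anchored at line start, indentation kept.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

sandbox_ok=$(grep -c 'pause:3.10.1' "$cfg")
cgroup_false=$(grep -c 'SystemdCgroup = false' "$cfg")
echo "sandbox_ok=$sandbox_ok cgroup_false=$cgroup_false"
rm -f "$cfg"
```

Matching `= .*$` rather than a specific old value makes each edit idempotent: rerunning it leaves the file unchanged.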
	I1018 12:00:19.611252  838384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:00:19.618753  838384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:00:19.626251  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:19.751139  838384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1018 12:00:19.836762  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.836852  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.836930  838384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1018 12:00:19.852094  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.864977  838384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:00:19.898620  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.911221  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:00:19.924203  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.938348  838384 ssh_runner.go:195] Run: which cri-dockerd
	I1018 12:00:19.942139  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1018 12:00:19.949360  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1018 12:00:19.962074  838384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1018 12:00:20.083838  838384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1018 12:00:20.212913  838384 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1018 12:00:20.213052  838384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1018 12:00:20.227027  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1018 12:00:20.239819  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:20.356987  838384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1018 12:00:20.730788  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:00:20.745019  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1018 12:00:20.760946  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:20.776122  838384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1018 12:00:20.908216  838384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1018 12:00:21.026034  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.144947  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1018 12:00:21.160912  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1018 12:00:21.173334  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.292303  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1018 12:00:21.361374  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:21.374804  838384 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1018 12:00:21.374947  838384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1018 12:00:21.379225  838384 start.go:563] Will wait 60s for crictl version
	I1018 12:00:21.379335  838384 ssh_runner.go:195] Run: which crictl
	I1018 12:00:21.382801  838384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:00:21.411496  838384 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1018 12:00:21.411621  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.433381  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.460386  838384 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1018 12:00:21.460510  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:21.476212  838384 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:00:21.480421  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
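The `/etc/hosts` update above is an idempotent append: strip any existing line for the host name with `grep -v`, then write the current entry, so repeated runs leave exactly one line. A sketch on a hypothetical temp file standing in for `/etc/hosts` (the 192.168.58.1 replacement address is made up for illustration):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Drop the stale tab-separated entry, append the new one, swap the file in.
tab=$(printf '\t')
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.58.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

entries=$(grep -c "host.minikube.internal\$" "$hosts")
ip=$(grep "host.minikube.internal\$" "$hosts" | cut -f1)
echo "entries=$entries ip=$ip"
rm -f "$hosts"
```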
	I1018 12:00:21.490334  838384 kubeadm.go:883] updating cluster {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:00:21.490454  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:21.490519  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.509291  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.509313  838384 docker.go:621] Images already preloaded, skipping extraction
	I1018 12:00:21.509377  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.528526  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.528551  838384 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:00:21.528561  838384 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1018 12:00:21.528654  838384 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-481299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:00:21.528727  838384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1018 12:00:21.581422  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:21.581453  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:21.581477  838384 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:00:21.581497  838384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-481299 NodeName:addons-481299 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:00:21.581620  838384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-481299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:00:21.581689  838384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:00:21.589319  838384 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:00:21.589438  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:00:21.597066  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 12:00:21.609634  838384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:00:21.622567  838384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:00:21.635067  838384 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:00:21.638568  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:00:21.647824  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.763836  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:21.780526  838384 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299 for IP: 192.168.49.2
	I1018 12:00:21.780549  838384 certs.go:195] generating shared ca certs ...
	I1018 12:00:21.780565  838384 certs.go:227] acquiring lock for ca certs: {Name:mk64b6deab019965f0a71e524905de5b1147e674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:21.780707  838384 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key
	I1018 12:00:22.104055  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt ...
	I1018 12:00:22.104088  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt: {Name:mk219bca6d4f923342f3157b83b20704e67715ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.104280  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key ...
	I1018 12:00:22.104292  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key: {Name:mk5866d61f42a017eec142fb2b809b20ce393fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.105023  838384 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key
	I1018 12:00:22.483966  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt ...
	I1018 12:00:22.483996  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt: {Name:mk2555147315d465bb6cbbe0ada0da38d0982383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484782  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key ...
	I1018 12:00:22.484802  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key: {Name:mke70c887973fdba4f391cbb188fbbcbcc67feb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484942  838384 certs.go:257] generating profile certs ...
	I1018 12:00:22.485032  838384 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key
	I1018 12:00:22.485075  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt with IP's: []
	I1018 12:00:22.766617  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt ...
	I1018 12:00:22.766652  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: {Name:mk71fca78b75ad1dca519de9fb0da28787da9922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.767459  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key ...
	I1018 12:00:22.767475  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key: {Name:mkfc16398f479a2713696abea6afcf11b3f1f676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.768111  838384 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943
	I1018 12:00:22.768134  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:00:23.220784  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 ...
	I1018 12:00:23.220818  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943: {Name:mk6c4c501d6b0340a440642541437e4d17340f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221595  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 ...
	I1018 12:00:23.221617  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943: {Name:mkcf265b1908bbb8a1e1337f558863cea9c9422c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221706  838384 certs.go:382] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt
	I1018 12:00:23.221802  838384 certs.go:386] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key
	I1018 12:00:23.221883  838384 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key
	I1018 12:00:23.221908  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt with IP's: []
	I1018 12:00:24.966646  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt ...
	I1018 12:00:24.966676  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt: {Name:mkee15b7b76ca75c0e94c93d9f26d926d62d1534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.967526  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key ...
	I1018 12:00:24.967544  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key: {Name:mk9dc72444d85c193fa5f2a33fb60909bdb001f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.968384  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:00:24.968428  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:00:24.968456  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:00:24.968483  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem (1679 bytes)
	I1018 12:00:24.969022  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:00:24.988248  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:00:25.007149  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:00:25.033918  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:00:25.066985  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:00:25.087585  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:00:25.108125  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:00:25.128839  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:00:25.148548  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:00:25.168256  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:00:25.184446  838384 ssh_runner.go:195] Run: openssl version
	I1018 12:00:25.191471  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:00:25.200726  838384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205014  838384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:00 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205128  838384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.246539  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:00:25.255568  838384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:00:25.259344  838384 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:00:25.259420  838384 kubeadm.go:400] StartCluster: {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:25.259592  838384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1018 12:00:25.275896  838384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:00:25.283941  838384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:00:25.291784  838384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:00:25.291878  838384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:00:25.299853  838384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:00:25.299873  838384 kubeadm.go:157] found existing configuration files:
	
	I1018 12:00:25.299925  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:00:25.307671  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:00:25.307740  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:00:25.315009  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:00:25.322614  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:00:25.322749  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:00:25.330039  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.337975  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:00:25.338088  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.346012  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:00:25.353964  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:00:25.354066  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:00:25.361694  838384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:00:25.404873  838384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:00:25.405115  838384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:00:25.430950  838384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:00:25.431100  838384 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:00:25.431181  838384 kubeadm.go:318] OS: Linux
	I1018 12:00:25.431259  838384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:00:25.431348  838384 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:00:25.431436  838384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:00:25.431543  838384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:00:25.431629  838384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:00:25.431723  838384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:00:25.431825  838384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:00:25.431922  838384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:00:25.432024  838384 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:00:25.495079  838384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:00:25.495226  838384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:00:25.495348  838384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:00:25.521654  838384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:00:25.528010  838384 out.go:252]   - Generating certificates and keys ...
	I1018 12:00:25.528156  838384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:00:25.528258  838384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:00:26.879731  838384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:00:26.999718  838384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:00:27.674514  838384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:00:27.895095  838384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:00:28.404519  838384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:00:28.404866  838384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:28.991527  838384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:00:28.991878  838384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:29.164485  838384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:00:29.695021  838384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:00:30.175093  838384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:00:30.176279  838384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:00:30.242007  838384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:00:30.627655  838384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:00:31.915030  838384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:00:32.022022  838384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:00:32.099687  838384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:00:32.100473  838384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:00:32.105382  838384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:00:32.109132  838384 out.go:252]   - Booting up control plane ...
	I1018 12:00:32.109259  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:00:32.109346  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:00:32.109424  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:00:32.126476  838384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:00:32.126811  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:00:32.134355  838384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:00:32.134771  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:00:32.135020  838384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:00:32.269048  838384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:00:32.269176  838384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:00:34.270707  838384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001804064s
	I1018 12:00:34.274174  838384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:00:34.274278  838384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:00:34.274378  838384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:00:34.274465  838384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:00:39.829992  838384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.555227625s
	I1018 12:00:40.310124  838384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.035887374s
	I1018 12:00:41.776311  838384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501922731s
	I1018 12:00:41.797119  838384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:00:41.811164  838384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:00:41.829757  838384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:00:41.829974  838384 kubeadm.go:318] [mark-control-plane] Marking the node addons-481299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:00:41.842508  838384 kubeadm.go:318] [bootstrap-token] Using token: 28o1tp.74tu7k9l138h9i1r
	I1018 12:00:41.845472  838384 out.go:252]   - Configuring RBAC rules ...
	I1018 12:00:41.845619  838384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:00:41.851305  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:00:41.861731  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:00:41.865572  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:00:41.869978  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:00:41.874041  838384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:00:42.183860  838384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:00:42.609144  838384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:00:43.183159  838384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:00:43.184156  838384 kubeadm.go:318] 
	I1018 12:00:43.184228  838384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:00:43.184234  838384 kubeadm.go:318] 
	I1018 12:00:43.184315  838384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:00:43.184319  838384 kubeadm.go:318] 
	I1018 12:00:43.184351  838384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:00:43.184413  838384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:00:43.184465  838384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:00:43.184469  838384 kubeadm.go:318] 
	I1018 12:00:43.184525  838384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:00:43.184530  838384 kubeadm.go:318] 
	I1018 12:00:43.184581  838384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:00:43.184586  838384 kubeadm.go:318] 
	I1018 12:00:43.184640  838384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:00:43.184718  838384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:00:43.184789  838384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:00:43.184808  838384 kubeadm.go:318] 
	I1018 12:00:43.184897  838384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:00:43.184977  838384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:00:43.184982  838384 kubeadm.go:318] 
	I1018 12:00:43.185069  838384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185176  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a \
	I1018 12:00:43.185221  838384 kubeadm.go:318] 	--control-plane 
	I1018 12:00:43.185227  838384 kubeadm.go:318] 
	I1018 12:00:43.185315  838384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:00:43.185320  838384 kubeadm.go:318] 
	I1018 12:00:43.185405  838384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185510  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a 
	I1018 12:00:43.189253  838384 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:00:43.189490  838384 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:00:43.189602  838384 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:00:43.189623  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:43.189644  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:43.192807  838384 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 12:00:43.195736  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 12:00:43.204431  838384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 12:00:43.217468  838384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:00:43.217531  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.217601  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-481299 minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-481299 minikube.k8s.io/primary=true
	I1018 12:00:43.380129  838384 ops.go:34] apiserver oom_adj: -16
	I1018 12:00:43.380250  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.880662  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.380452  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.880320  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.381247  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.881072  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.380646  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.880456  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.380377  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.880798  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:48.013839  838384 kubeadm.go:1113] duration metric: took 4.796362824s to wait for elevateKubeSystemPrivileges
	I1018 12:00:48.013872  838384 kubeadm.go:402] duration metric: took 22.754456109s to StartCluster
	I1018 12:00:48.013892  838384 settings.go:142] acquiring lock: {Name:mk03f4782566e111488ff6c3364f55783ae35e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014045  838384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:00:48.014571  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/kubeconfig: {Name:mk9c5565e95c70d1e94713bb6739a4c61bf95b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014840  838384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:48.014969  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:00:48.015286  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.015326  838384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:00:48.015411  838384 addons.go:69] Setting yakd=true in profile "addons-481299"
	I1018 12:00:48.015430  838384 addons.go:238] Setting addon yakd=true in "addons-481299"
	I1018 12:00:48.015461  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.016044  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.016527  838384 addons.go:69] Setting metrics-server=true in profile "addons-481299"
	I1018 12:00:48.016553  838384 addons.go:238] Setting addon metrics-server=true in "addons-481299"
	I1018 12:00:48.016589  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.017051  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.017907  838384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.019860  838384 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-481299"
	I1018 12:00:48.019957  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.021475  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019545  838384 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.022344  838384 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-481299"
	I1018 12:00:48.022416  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.019668  838384 addons.go:69] Setting cloud-spanner=true in profile "addons-481299"
	I1018 12:00:48.024324  838384 addons.go:238] Setting addon cloud-spanner=true in "addons-481299"
	I1018 12:00:48.024374  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.024861  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.025713  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019679  838384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-481299"
	I1018 12:00:48.030476  838384 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:00:48.030552  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.031113  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019684  838384 addons.go:69] Setting default-storageclass=true in profile "addons-481299"
	I1018 12:00:48.041523  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-481299"
	I1018 12:00:48.042108  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019687  838384 addons.go:69] Setting gcp-auth=true in profile "addons-481299"
	I1018 12:00:48.050391  838384 mustload.go:65] Loading cluster: addons-481299
	I1018 12:00:48.050608  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.050867  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019691  838384 addons.go:69] Setting ingress=true in profile "addons-481299"
	I1018 12:00:48.072186  838384 addons.go:238] Setting addon ingress=true in "addons-481299"
	I1018 12:00:48.072238  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.072812  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.104085  838384 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:00:48.019694  838384 addons.go:69] Setting ingress-dns=true in profile "addons-481299"
	I1018 12:00:48.105418  838384 addons.go:238] Setting addon ingress-dns=true in "addons-481299"
	I1018 12:00:48.105466  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.105984  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.112477  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:00:48.112591  838384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:00:48.112702  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019703  838384 addons.go:69] Setting inspektor-gadget=true in profile "addons-481299"
	I1018 12:00:48.128479  838384 addons.go:238] Setting addon inspektor-gadget=true in "addons-481299"
	I1018 12:00:48.128547  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.129027  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.137976  838384 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:00:48.142238  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:00:48.142360  838384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:00:48.142534  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019834  838384 out.go:179] * Verifying Kubernetes components...
	I1018 12:00:48.020098  838384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-481299"
	I1018 12:00:48.152243  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-481299"
	I1018 12:00:48.152617  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020105  838384 addons.go:69] Setting registry=true in profile "addons-481299"
	I1018 12:00:48.157306  838384 addons.go:238] Setting addon registry=true in "addons-481299"
	I1018 12:00:48.157349  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.157826  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020108  838384 addons.go:69] Setting registry-creds=true in profile "addons-481299"
	I1018 12:00:48.184275  838384 addons.go:238] Setting addon registry-creds=true in "addons-481299"
	I1018 12:00:48.184329  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.184843  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.201404  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:48.237763  838384 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:00:48.240788  838384 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:48.240812  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:00:48.240888  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020111  838384 addons.go:69] Setting storage-provisioner=true in profile "addons-481299"
	I1018 12:00:48.255281  838384 addons.go:238] Setting addon storage-provisioner=true in "addons-481299"
	I1018 12:00:48.255324  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.255806  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.263489  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.267247  838384 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:00:48.020115  838384 addons.go:69] Setting volumesnapshots=true in profile "addons-481299"
	I1018 12:00:48.287811  838384 addons.go:238] Setting addon volumesnapshots=true in "addons-481299"
	I1018 12:00:48.287854  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.288331  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.297150  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:00:48.305304  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:00:48.307825  838384 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:48.307845  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:00:48.307915  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020121  838384 addons.go:69] Setting volcano=true in profile "addons-481299"
	I1018 12:00:48.309165  838384 addons.go:238] Setting addon volcano=true in "addons-481299"
	I1018 12:00:48.309239  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.309684  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.323250  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.327844  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:00:48.334482  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:00:48.338284  838384 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:00:48.340947  838384 addons.go:238] Setting addon default-storageclass=true in "addons-481299"
	I1018 12:00:48.340997  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.341575  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.353583  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.359214  838384 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:48.359297  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:00:48.359408  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.368078  838384 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:00:48.390679  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:00:48.394064  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:00:48.396983  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:00:48.397244  838384 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:00:48.406053  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:00:48.407309  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.409466  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:48.409487  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:00:48.409583  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.411154  838384 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:48.411801  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:00:48.411868  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.436870  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:48.436945  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:00:48.437058  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.470112  838384 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:00:48.476747  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:00:48.411213  838384 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:00:48.483079  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.485480  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:00:48.485501  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:00:48.485576  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.486853  838384 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-481299"
	I1018 12:00:48.486939  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.487597  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.517521  838384 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:00:48.517545  838384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:00:48.517619  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.537436  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.538328  838384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:48.538343  838384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:00:48.538398  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.553217  838384 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:00:48.554968  838384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:00:48.556537  838384 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:00:48.556742  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:00:48.556825  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.570246  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:48.570267  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:00:48.570329  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.589916  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.591402  838384 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1018 12:00:48.591565  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:00:48.600425  838384 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1018 12:00:48.600484  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:00:48.600494  838384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:00:48.600575  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.621433  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.622509  838384 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1018 12:00:48.640797  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.660645  838384 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:48.660673  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1018 12:00:48.660741  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.709393  838384 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:00:48.713361  838384 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:00:48.715118  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.737398  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.754446  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.759608  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.767028  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:48.767051  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:00:48.767120  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.774154  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.778151  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.785350  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.790983  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.816526  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.817012  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	W1018 12:00:48.829777  838384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:00:48.829816  838384 retry.go:31] will retry after 218.102369ms: ssh: handshake failed: EOF
	I1018 12:00:48.929211  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:00:48.930600  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:49.692319  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:00:49.692342  838384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:00:49.778672  838384 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:00:49.778699  838384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:00:49.799026  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:00:49.799099  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:00:49.810733  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:49.821114  838384 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:49.821260  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:00:49.834872  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:49.938418  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:49.941990  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:49.960460  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:00:49.960486  838384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:00:49.976966  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:00:49.976992  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:00:49.982393  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:50.005856  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:50.006119  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:00:50.006139  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:00:50.030431  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:50.033442  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:50.078822  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:00:50.078897  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:00:50.223993  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:50.246008  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:00:50.246050  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:00:50.280883  838384 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.280909  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:00:50.285103  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:50.292971  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:00:50.292997  838384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:00:50.305000  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:00:50.305028  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:00:50.307406  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:00:50.307430  838384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:00:50.427676  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:00:50.427702  838384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:00:50.443185  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.443211  838384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:00:50.467032  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:50.523115  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.551023  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.551047  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:00:50.599104  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:00:50.599134  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:00:50.673841  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.676793  838384 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:50.676869  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:00:50.686555  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.800152  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:00:50.800216  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:00:50.933424  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:51.038226  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:00:51.038306  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:00:51.103206  838384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.173952226s)
	I1018 12:00:51.103234  838384 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 12:00:51.104274  838384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.173645704s)
	I1018 12:00:51.104963  838384 node_ready.go:35] waiting up to 6m0s for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108529  838384 node_ready.go:49] node "addons-481299" is "Ready"
	I1018 12:00:51.108556  838384 node_ready.go:38] duration metric: took 3.569435ms for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108574  838384 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:00:51.108657  838384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:00:51.393745  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.582925006s)
	I1018 12:00:51.555496  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:00:51.555567  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:00:51.607401  838384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-481299" context rescaled to 1 replicas
	I1018 12:00:52.004594  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:00:52.004679  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:00:52.455938  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.620977988s)
	I1018 12:00:52.518962  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:00:52.519035  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:00:52.943868  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:00:52.943948  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:00:53.278688  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:53.278715  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 12:00:53.606823  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:54.485982  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.547484935s)
	I1018 12:00:55.375228  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.392799926s)
	I1018 12:00:55.381718  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.439683188s)
	I1018 12:00:55.687452  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:00:55.687560  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:55.715052  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:56.874895  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:00:57.069070  838384 addons.go:238] Setting addon gcp-auth=true in "addons-481299"
	I1018 12:00:57.069155  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:57.069643  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:57.097607  838384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:00:57.097663  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:57.125594  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:57.466799  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.460903197s)
	I1018 12:00:57.466833  838384 addons.go:479] Verifying addon ingress=true in "addons-481299"
	I1018 12:00:57.466986  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436528408s)
	I1018 12:00:57.467006  838384 addons.go:479] Verifying addon registry=true in "addons-481299"
	I1018 12:00:57.467280  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.433810994s)
	I1018 12:00:57.467337  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.243320649s)
	I1018 12:00:57.467373  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.182252425s)
	I1018 12:00:57.471733  838384 out.go:179] * Verifying registry addon...
	I1018 12:00:57.471837  838384 out.go:179] * Verifying ingress addon...
	I1018 12:00:57.475972  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:00:57.476891  838384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:00:57.480952  838384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:00:57.480978  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:57.481343  838384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:00:57.481364  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987158  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987606  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505049  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505238  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102780  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102881  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.537809  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.538274  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.983982  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.989640  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:00.494679  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:00.494785  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.150283  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.151501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.410549  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.94347934s)
	I1018 12:01:01.410647  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.88750974s)
	W1018 12:01:01.410663  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410677  838384 retry.go:31] will retry after 201.547914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410735  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.736813896s)
	I1018 12:01:01.410746  838384 addons.go:479] Verifying addon metrics-server=true in "addons-481299"
	I1018 12:01:01.410781  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.724138069s)
	I1018 12:01:01.410951  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.477459185s)
	W1018 12:01:01.415024  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.415053  838384 retry.go:31] will retry after 281.160141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.410970  838384 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.302296364s)
	I1018 12:01:01.415078  838384 api_server.go:72] duration metric: took 13.40020826s to wait for apiserver process to appear ...
	I1018 12:01:01.415084  838384 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:01:01.415099  838384 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:01:01.411139  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.804213387s)
	I1018 12:01:01.415446  838384 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:01:01.411161  838384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.313535397s)
	I1018 12:01:01.419793  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:01.419828  838384 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-481299 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:01:01.419793  838384 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:01:01.425137  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:01:01.428507  838384 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:01:01.431298  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:01:01.431327  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:01:01.501730  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:01:01.501758  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:01:01.508482  838384 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:01:01.513158  838384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:01:01.513201  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:01.539110  838384 api_server.go:141] control plane version: v1.34.1
	I1018 12:01:01.539156  838384 api_server.go:131] duration metric: took 124.06524ms to wait for apiserver health ...
	I1018 12:01:01.539166  838384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:01:01.613319  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:01.627688  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.627710  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:01:01.665079  838384 system_pods.go:59] 18 kube-system pods found
	I1018 12:01:01.665176  838384 system_pods.go:61] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.665300  838384 system_pods.go:61] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.665325  838384 system_pods.go:61] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.665360  838384 system_pods.go:61] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.665384  838384 system_pods.go:61] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.665405  838384 system_pods.go:61] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.665426  838384 system_pods.go:61] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.665459  838384 system_pods.go:61] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.665480  838384 system_pods.go:61] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.665500  838384 system_pods.go:61] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.665522  838384 system_pods.go:61] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.665545  838384 system_pods.go:61] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.665567  838384 system_pods.go:61] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.665592  838384 system_pods.go:61] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.665624  838384 system_pods.go:61] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.665649  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665673  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665696  838384 system_pods.go:61] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.665719  838384 system_pods.go:74] duration metric: took 126.545392ms to wait for pod list to return data ...
	I1018 12:01:01.665740  838384 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:01:01.666428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.666860  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.697178  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:01.740057  838384 default_sa.go:45] found service account: "default"
	I1018 12:01:01.740128  838384 default_sa.go:55] duration metric: took 74.366134ms for default service account to be created ...
	I1018 12:01:01.740153  838384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:01:01.750661  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.822559  838384 system_pods.go:86] 18 kube-system pods found
	I1018 12:01:01.822661  838384 system_pods.go:89] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.822687  838384 system_pods.go:89] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.822728  838384 system_pods.go:89] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.822757  838384 system_pods.go:89] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.822780  838384 system_pods.go:89] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.822805  838384 system_pods.go:89] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.822838  838384 system_pods.go:89] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.822870  838384 system_pods.go:89] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.822891  838384 system_pods.go:89] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.822912  838384 system_pods.go:89] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.822945  838384 system_pods.go:89] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.822968  838384 system_pods.go:89] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.822993  838384 system_pods.go:89] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.823021  838384 system_pods.go:89] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.823053  838384 system_pods.go:89] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.823081  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823104  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823125  838384 system_pods.go:89] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.823163  838384 system_pods.go:126] duration metric: took 82.986663ms to wait for k8s-apps to be running ...
	I1018 12:01:01.823190  838384 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:01:01.823297  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:01:01.937035  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.020374  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.020832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.430107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.481721  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.482135  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.929149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.979391  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.981564  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.429257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.480836  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.481398  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:03.928420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.981042  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.981494  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.429897  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.480820  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.482111  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:04.534347  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.920939875s)
	W1018 12:01:04.534529  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534551  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.783796022s)
	I1018 12:01:04.534584  838384 retry.go:31] will retry after 344.775954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534505  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.837082754s)
	I1018 12:01:04.534652  838384 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.71132558s)
	I1018 12:01:04.534680  838384 system_svc.go:56] duration metric: took 2.711486896s WaitForService to wait for kubelet
	I1018 12:01:04.534699  838384 kubeadm.go:586] duration metric: took 16.519828999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:01:04.534730  838384 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:01:04.537903  838384 addons.go:479] Verifying addon gcp-auth=true in "addons-481299"
	I1018 12:01:04.538525  838384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:01:04.538550  838384 node_conditions.go:123] node cpu capacity is 2
	I1018 12:01:04.538562  838384 node_conditions.go:105] duration metric: took 3.804704ms to run NodePressure ...
	I1018 12:01:04.538574  838384 start.go:241] waiting for startup goroutines ...
	I1018 12:01:04.541908  838384 out.go:179] * Verifying gcp-auth addon...
	I1018 12:01:04.545758  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:01:04.549219  838384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:01:04.549243  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:04.879563  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:04.929625  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.979948  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.980174  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.049663  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.429694  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:05.481708  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.482094  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:05.550159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.929062  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.026330  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146729751s)
	W1018 12:01:06.026368  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.026387  838384 retry.go:31] will retry after 530.445094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.029966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.030602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.049648  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.428850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.480186  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.480559  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.549270  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.557566  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:06.928852  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.981051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.981269  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.049233  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.428928  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.480325  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.480920  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:07.549301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.666703  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109094231s)
	W1018 12:01:07.666792  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.666842  838384 retry.go:31] will retry after 603.526174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.928884  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.980605  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.981081  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.049925  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.271551  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:08.428903  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.480126  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.481567  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.550087  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.929709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.982028  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.982437  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.049266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.386734  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.115102938s)
	W1018 12:01:09.386771  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.386790  838384 retry.go:31] will retry after 1.698280239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.429436  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.480507  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:09.480673  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.548565  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.929443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.981023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.981260  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.050256  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.428811  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.480068  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.480432  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:10.549416  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.929547  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.980558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.981814  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.049021  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.085333  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:11.429636  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.480969  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.481107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.549095  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.929997  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.981838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.982268  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.050261  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.218914  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133541217s)
	W1018 12:01:12.218954  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.218972  838384 retry.go:31] will retry after 1.345701913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.429128  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.480634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.480946  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.548958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.929428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.980178  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.980839  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.048870  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.429927  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.478960  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:13.480243  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.550057  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.565398  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:13.929670  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.980749  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.981541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.050280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.430257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.481781  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.482216  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:14.549060  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.674833  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.10935166s)
	W1018 12:01:14.674911  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.674935  838384 retry.go:31] will retry after 4.015074409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.929000  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.980481  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.980936  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.049371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.428339  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.481378  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:15.549296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.928645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.979966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.980109  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.049992  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.429300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.481508  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.481893  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:16.549251  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.929096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.979808  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.981556  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.050187  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.428204  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.488687  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.490476  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.549747  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.930135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.979752  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.980568  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.050894  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.428555  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:18.481131  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:18.483023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.691077  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:18.933149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.030226  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.030879  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.049132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.430174  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.481455  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.481882  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.549279  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.871749  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.180617611s)
	W1018 12:01:19.871793  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.871814  838384 retry.go:31] will retry after 2.515978912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.929529  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.980077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.980426  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.052024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.429464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.479838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.480286  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:20.548968  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.928596  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.981305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.981761  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.050329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.430266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.481117  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:21.482392  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.549618  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.930023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.980648  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.981714  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.051678  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.387962  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:22.428709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.480821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:22.481479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.549163  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.929296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.980516  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.982615  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.049456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.430314  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.483574  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.483954  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.548803  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.611192  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.223187854s)
	W1018 12:01:23.611326  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.611398  838384 retry.go:31] will retry after 3.829328757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.930083  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.980905  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.983008  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.050301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.429404  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.480464  838384 kapi.go:107] duration metric: took 27.00449331s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:01:24.481254  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.549300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.933613  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.980927  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.050132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.428170  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.480118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.549464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.930020  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.980129  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.049622  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.429371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.480502  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.549703  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.929758  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.981438  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.049865  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.428814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.441124  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:27.480812  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.549252  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.928712  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.981084  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.049705  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.430621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.481300  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.550023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.635231  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.194068908s)
	W1018 12:01:28.635269  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.635306  838384 retry.go:31] will retry after 12.348738913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.929164  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.980051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.049370  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.429546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.549608  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.928941  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.980182  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.053764  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.428514  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.480543  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.549832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.929633  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.980682  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.048503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.429926  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.480706  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.549798  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.981501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.050135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.430225  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.480064  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.549107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.930034  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.980601  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.050009  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.428665  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.481555  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.549517  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.930160  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.980721  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.049111  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.430075  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.480599  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.549824  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.928660  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.981106  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.049429  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.429715  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.480901  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.549527  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.928904  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.979975  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.049469  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.429388  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.480507  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.549383  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.928634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.980506  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.049878  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.428319  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.480864  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.549709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.933484  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.981299  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.050356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.432420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.531156  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.549592  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.929614  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.980821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.048908  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.432072  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:39.483680  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.549850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.930073  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.042288  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.051491  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.429568  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.529694  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.592500  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.930099  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.981977  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.984969  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:41.048958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.428951  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.480793  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:41.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.935106  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.980130  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.050058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.376984  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.391917505s)
	W1018 12:01:42.377022  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.377040  838384 retry.go:31] will retry after 9.460130047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.430039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.481400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.549280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.930471  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.981285  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.050674  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.429650  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.550049  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.928854  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.980322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.049881  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.429125  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.480353  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.551159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.928689  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.980964  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.084556  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.428634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.529995  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.548921  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.930616  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.981362  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.050003  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.429102  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.480267  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.549814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.928994  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.993938  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.049617  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.429329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:47.481657  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.550619  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.930499  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.032192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.054316  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.440479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.542668  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.550977  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.929346  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.980748  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.048945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.449438  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.498203  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.551135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.928541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.980400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.055907  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.428586  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.481082  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.549349  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.929024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.980094  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.049047  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.431103  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.480475  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.549503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.837922  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:51.929098  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.985759  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.051760  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.429677  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.480886  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.549039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.930278  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.981010  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.049546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.257837  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.419863779s)
	W1018 12:01:53.257928  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.257962  838384 retry.go:31] will retry after 22.303674936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.428816  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.481322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.549342  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.931755  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.981558  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.049443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.430288  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.549297  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.928602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.980986  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.049290  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.429448  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.481581  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.549440  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.929401  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.980439  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.049558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.502856  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.503643  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.549688  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.981333  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.981570  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.049930  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.428413  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.480953  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.549886  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.929377  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.980404  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.049753  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.432621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.480756  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.549719  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.929420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.980375  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.049250  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.428644  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.480847  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.549883  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.928356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.980048  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.051534  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.430130  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.481277  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.549945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.928372  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.980700  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.049545  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.429844  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.551570  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.929456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.982282  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.049630  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.430524  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.481077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.549352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.928839  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.986842  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.049134  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.429457  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.480188  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.549085  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.928535  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.980769  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.048580  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.433304  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.480494  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.550096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.930036  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.980758  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.050726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.429347  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.480398  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.550148  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.928153  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.980059  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.048983  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.428741  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.481118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.548947  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.928168  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.980304  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.049241  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.428880  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.549892  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.981385  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.049855  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.430001  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.480903  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.548710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.932157  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.980207  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.052710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.458645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.502292  838384 kapi.go:107] duration metric: took 1m12.025397786s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:02:09.550352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.930139  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.049906  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.428576  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.550008  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.928058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.049512  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.430390  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.552726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.929915  838384 kapi.go:107] duration metric: took 1m10.504798048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:02:12.049079  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:12.550257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.050305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.549966  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.049761  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.549742  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.050822  838384 kapi.go:107] duration metric: took 1m10.505062422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:02:15.055403  838384 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-481299 cluster.
	I1018 12:02:15.059410  838384 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:02:15.062510  838384 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 12:02:15.562479  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:16.467612  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:16.467639  838384 retry.go:31] will retry after 33.846916648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:50.315580  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:51.158051  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:02:51.158143  838384 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 12:02:51.163303  838384 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, nvidia-device-plugin, registry-creds, ingress-dns, volcano, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 12:02:51.167339  838384 addons.go:514] duration metric: took 2m3.15084241s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher nvidia-device-plugin registry-creds ingress-dns volcano metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 12:02:51.167414  838384 start.go:246] waiting for cluster config update ...
	I1018 12:02:51.167438  838384 start.go:255] writing updated cluster config ...
	I1018 12:02:51.167767  838384 ssh_runner.go:195] Run: rm -f paused
	I1018 12:02:51.172083  838384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:51.175895  838384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.182054  838384 pod_ready.go:94] pod "coredns-66bc5c9577-rz5sw" is "Ready"
	I1018 12:02:51.182084  838384 pod_ready.go:86] duration metric: took 6.16488ms for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.184338  838384 pod_ready.go:83] waiting for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.189308  838384 pod_ready.go:94] pod "etcd-addons-481299" is "Ready"
	I1018 12:02:51.189338  838384 pod_ready.go:86] duration metric: took 4.974639ms for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.191936  838384 pod_ready.go:83] waiting for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.196757  838384 pod_ready.go:94] pod "kube-apiserver-addons-481299" is "Ready"
	I1018 12:02:51.196779  838384 pod_ready.go:86] duration metric: took 4.818772ms for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.199224  838384 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.576040  838384 pod_ready.go:94] pod "kube-controller-manager-addons-481299" is "Ready"
	I1018 12:02:51.576065  838384 pod_ready.go:86] duration metric: took 376.813094ms for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.776236  838384 pod_ready.go:83] waiting for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.176297  838384 pod_ready.go:94] pod "kube-proxy-9r8gt" is "Ready"
	I1018 12:02:52.176326  838384 pod_ready.go:86] duration metric: took 400.06131ms for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.376266  838384 pod_ready.go:83] waiting for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776669  838384 pod_ready.go:94] pod "kube-scheduler-addons-481299" is "Ready"
	I1018 12:02:52.776749  838384 pod_ready.go:86] duration metric: took 400.404692ms for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776777  838384 pod_ready.go:40] duration metric: took 1.604654564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:52.836011  838384 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:02:52.839584  838384 out.go:179] * Done! kubectl is now configured to use "addons-481299" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 18 12:02:00 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:00Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5: Status: Downloaded newer image for registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Oct 18 12:02:00 addons-481299 dockerd[1124]: time="2025-10-18T12:02:00.771332019Z" level=warning msg="reference for unknown type: " digest="sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0" remote="registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Oct 18 12:02:00 addons-481299 dockerd[1124]: time="2025-10-18T12:02:00.841164933Z" level=info msg="ignoring event" container=1d4d4fefc0e588588a10bf317f016660bf55523ba34c67653b7bf78800f9c254 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:02:01 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:01Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Oct 18 12:02:01 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/431b9cc69faad0f2ce8d0f026327bf5a50ce7b80b39d59712187003220a84ee7/resolv.conf as [nameserver 10.96.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:02:01 addons-481299 dockerd[1124]: time="2025-10-18T12:02:01.900471353Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Oct 18 12:02:02 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e81033eb8a7243596bc17ca88f8c7c1440cc2fe8e6e785cc5efe90dbcabd9f11/resolv.conf as [nameserver 10.96.0.10 search volcano-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:02:03 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:03Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Oct 18 12:02:03 addons-481299 dockerd[1124]: time="2025-10-18T12:02:03.450205331Z" level=warning msg="reference for unknown type: " digest="sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd" remote="registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd"
	Oct 18 12:02:08 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:08Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.13.3@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd"
	Oct 18 12:02:08 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f18b4c784d72f76b004d8b1e3f39f6d3cec7b40ce8e7954bf65b0249e75902d0/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:02:08 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:08Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
	Oct 18 12:02:08 addons-481299 dockerd[1124]: time="2025-10-18T12:02:08.841074157Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Oct 18 12:02:10 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:10Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Oct 18 12:02:10 addons-481299 dockerd[1124]: time="2025-10-18T12:02:10.186053351Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Oct 18 12:02:13 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:02:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Oct 18 12:03:11 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:03:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4728e1e8fdeb4f2d2427239257541570640c3af9cb2686fe23d430ebc3be904a/resolv.conf as [nameserver 10.96.0.10 search my-volcano.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:03:11 addons-481299 dockerd[1124]: time="2025-10-18T12:03:11.735106928Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:03:11 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:03:11Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
	Oct 18 12:03:26 addons-481299 dockerd[1124]: time="2025-10-18T12:03:26.793570306Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:03:52 addons-481299 dockerd[1124]: time="2025-10-18T12:03:52.918607856Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:03:52 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:03:52Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
	Oct 18 12:04:42 addons-481299 dockerd[1124]: time="2025-10-18T12:04:42.820080424Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:06:07 addons-481299 dockerd[1124]: time="2025-10-18T12:06:07.927937923Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:06:07 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:06:07Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	8ba5f0c67ede0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 3 minutes ago       Running             gcp-auth                                 0                   f18b4c784d72f       gcp-auth-78565c9fb4-vt5q2                   gcp-auth
	4d48c965fce74       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago       Running             csi-snapshotter                          0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	15d05a6dc59f3       volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001                                         4 minutes ago       Running             admission                                0                   e81033eb8a724       volcano-admission-6c447bd768-27fgr          volcano-system
	8a05108e7b43d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             4 minutes ago       Running             controller                               0                   431b9cc69faad       ingress-nginx-controller-675c5ddd98-285md   ingress-nginx
	f83395435c560       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          4 minutes ago       Running             csi-provisioner                          0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	872fdd27e58a8       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            4 minutes ago       Running             liveness-probe                           0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	bfe8ec14dbc5b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           4 minutes ago       Running             hostpath                                 0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	fece67ae97aa8       9a80c0c8eb61c                                                                                                                                4 minutes ago       Exited              patch                                    2                   1d4d4fefc0e58       ingress-nginx-admission-patch-2wvtk         ingress-nginx
	3c690158312df       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                4 minutes ago       Running             node-driver-registrar                    0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	9d6a1790759ab       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   4 minutes ago       Running             csi-external-health-monitor-controller   0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	70b8546c3bbc2       volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34                                               4 minutes ago       Running             volcano-scheduler                        0                   055518d0670ce       volcano-scheduler-76c996c8bf-r9ldz          volcano-system
	59681c39be97d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago       Running             csi-resizer                              0                   63f35a2a9593c       csi-hostpath-resizer-0                      kube-system
	e3a81c28c5300       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             4 minutes ago       Running             csi-attacher                             0                   0a72279b138c8       csi-hostpath-attacher-0                     kube-system
	6cb3afa4498f1       volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242                                      4 minutes ago       Running             volcano-controllers                      0                   7fba68872505f       volcano-controllers-6fd4f85cb8-bqlv2        volcano-system
	53b6871ca3d73       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   4 minutes ago       Exited              create                                   0                   a56a89e8a43cc       ingress-nginx-admission-create-xtv6g        ingress-nginx
	81902e2733f2b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd                            4 minutes ago       Running             gadget                                   0                   680d644fbcd5c       gadget-fg7d4                                gadget
	3f96c18e36d8d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago       Running             volume-snapshot-controller               0                   0609845efda87       snapshot-controller-7d9fbc56b8-xfp9p        kube-system
	e1a780c238dba       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago       Running             volume-snapshot-controller               0                   f7f6e990ad6a6       snapshot-controller-7d9fbc56b8-24v9j        kube-system
	b202ac5239a46       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        4 minutes ago       Running             metrics-server                           0                   866ba49820966       metrics-server-85b7d694d7-7dj5n             kube-system
	3315bc7f0cf33       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       4 minutes ago       Running             local-path-provisioner                   0                   37d7d55385491       local-path-provisioner-648f6765c9-cdrdl     local-path-storage
	2feb79de871cb       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        4 minutes ago       Running             yakd                                     0                   c21f466aaf970       yakd-dashboard-5ff678cb9-nxvml              yakd-dashboard
	eab72ef0e77c8       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         4 minutes ago       Running             minikube-ingress-dns                     0                   50079659ba367       kube-ingress-dns-minikube                   kube-system
	235f5dafa7720       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              4 minutes ago       Running             registry-proxy                           0                   73afb7c260eec       registry-proxy-mx9vb                        kube-system
	1c5375e320408       registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e                                                             4 minutes ago       Running             registry                                 0                   8af20fd6a09b7       registry-6b586f9694-zqqmr                   kube-system
	b7a4f459688f4       gcr.io/cloud-spanner-emulator/emulator@sha256:335f6daa572494373ab0e16f6f574aced7425f3755182faf42089f838d6f38e1                               4 minutes ago       Running             cloud-spanner-emulator                   0                   e32407548d49a       cloud-spanner-emulator-86bd5cbb97-dzsz8     default
	2be9853689935       nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd                                     4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   8f163548644e0       nvidia-device-plugin-daemonset-j66tb        kube-system
	25c9372900a26       ba04bb24b9575                                                                                                                                5 minutes ago       Running             storage-provisioner                      0                   4004c0456a8a4       storage-provisioner                         kube-system
	5797fb80a206f       138784d87c9c5                                                                                                                                5 minutes ago       Running             coredns                                  0                   5fe43b09fa9ef       coredns-66bc5c9577-rz5sw                    kube-system
	dae56d7c71c17       05baa95f5142d                                                                                                                                5 minutes ago       Running             kube-proxy                               0                   ecfef3e21c7ae       kube-proxy-9r8gt                            kube-system
	dba8e74f2f998       b5f57ec6b9867                                                                                                                                5 minutes ago       Running             kube-scheduler                           0                   5c4a90f92f416       kube-scheduler-addons-481299                kube-system
	c52ddb45ce912       7eb2c6ff0c5a7                                                                                                                                5 minutes ago       Running             kube-controller-manager                  0                   2332636f8cc54       kube-controller-manager-addons-481299       kube-system
	9523d2b24f1a6       a1894772a478e                                                                                                                                5 minutes ago       Running             etcd                                     0                   ea5e934575f93       etcd-addons-481299                          kube-system
	48f480a306b71       43911e833d64d                                                                                                                                5 minutes ago       Running             kube-apiserver                           0                   ddf248b652d44       kube-apiserver-addons-481299                kube-system
	
	
	==> controller_ingress [8a05108e7b43] <==
	I1018 12:02:08.924425       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.1" state="clean" commit="93248f9ae092f571eb870b7664c534bfc7d00f03" platform="linux/arm64"
	I1018 12:02:09.031156       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1018 12:02:09.043091       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1018 12:02:09.058313       6 nginx.go:273] "Starting NGINX Ingress controller"
	I1018 12:02:09.082853       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"42fc76e3-0c78-4800-9165-c304c60bade9", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1018 12:02:09.082897       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"bad76739-6da4-44ee-ad7e-7821403e700d", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1018 12:02:09.082907       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"46559a9e-dfe6-4342-b6c4-a2e0d79ffec0", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1018 12:02:10.260707       6 nginx.go:319] "Starting NGINX process"
	I1018 12:02:10.260918       6 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1018 12:02:10.261398       6 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1018 12:02:10.261848       6 controller.go:214] "Configuration changes detected, backend reload required"
	I1018 12:02:10.271851       6 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1018 12:02:10.273337       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-285md"
	I1018 12:02:10.280403       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	I1018 12:02:10.330867       6 controller.go:228] "Backend successfully reloaded"
	I1018 12:02:10.330933       6 controller.go:240] "Initial sync, sleeping for 1 second"
	I1018 12:02:10.331249       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-285md", UID:"95fa65aa-137a-4c83-b067-43da76402c6e", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1018 12:02:10.381659       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	  Release:       v1.13.3
	  Build:         93851f05e61d99eea49140c9be73499a3cb92ccc
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [5797fb80a206] <==
	[INFO] 10.244.0.7:54253 - 21415 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000086647s
	[INFO] 10.244.0.7:54253 - 60075 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001873588s
	[INFO] 10.244.0.7:54253 - 32829 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002752352s
	[INFO] 10.244.0.7:54253 - 48434 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000139218s
	[INFO] 10.244.0.7:54253 - 23824 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103517s
	[INFO] 10.244.0.7:35646 - 48715 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000276574s
	[INFO] 10.244.0.7:35646 - 52325 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112625s
	[INFO] 10.244.0.7:45943 - 64938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090488s
	[INFO] 10.244.0.7:45943 - 65152 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099734s
	[INFO] 10.244.0.7:33837 - 21640 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105166s
	[INFO] 10.244.0.7:33837 - 21813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009532s
	[INFO] 10.244.0.7:58089 - 39529 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001295941s
	[INFO] 10.244.0.7:58089 - 39709 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001006773s
	[INFO] 10.244.0.7:49666 - 228 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115406s
	[INFO] 10.244.0.7:49666 - 392 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093999s
	[INFO] 10.244.0.26:58341 - 57714 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271404s
	[INFO] 10.244.0.26:33384 - 49094 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121051s
	[INFO] 10.244.0.26:51350 - 39772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197935s
	[INFO] 10.244.0.26:51818 - 1006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135993s
	[INFO] 10.244.0.26:60342 - 22606 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138184s
	[INFO] 10.244.0.26:55393 - 64846 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151477s
	[INFO] 10.244.0.26:33581 - 50204 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064442s
	[INFO] 10.244.0.26:57107 - 22259 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002808697s
	[INFO] 10.244.0.26:34046 - 55453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002593408s
	[INFO] 10.244.0.26:55019 - 57919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003283041s
	
	
	==> describe nodes <==
	Name:               addons-481299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-481299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-481299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-481299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-481299"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:00:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-481299
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:06:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:02:46 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:02:46 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:02:46 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:02:46 +0000   Sat, 18 Oct 2025 12:00:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-481299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9d0b013c-4a29-4459-8491-dd5d81ae22da
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-86bd5cbb97-dzsz8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  gadget                      gadget-fg7d4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  gcp-auth                    gcp-auth-78565c9fb4-vt5q2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-285md    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m14s
	  kube-system                 coredns-66bc5c9577-rz5sw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m23s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 csi-hostpathplugin-c4hl2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 etcd-addons-481299                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m30s
	  kube-system                 kube-apiserver-addons-481299                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-addons-481299        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-9r8gt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-addons-481299                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 metrics-server-85b7d694d7-7dj5n              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m15s
	  kube-system                 nvidia-device-plugin-daemonset-j66tb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 registry-6b586f9694-zqqmr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 registry-creds-764b6fb674-mrx54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 registry-proxy-mx9vb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-24v9j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 snapshot-controller-7d9fbc56b8-xfp9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  local-path-storage          local-path-provisioner-648f6765c9-cdrdl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  my-volcano                  test-job-nginx-0                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  volcano-system              volcano-admission-6c447bd768-27fgr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  volcano-system              volcano-controllers-6fd4f85cb8-bqlv2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  volcano-system              volcano-scheduler-76c996c8bf-r9ldz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nxvml               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             588Mi (7%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m21s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m37s (x8 over 5m38s)  kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m37s (x8 over 5m38s)  kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m37s (x7 over 5m38s)  kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m29s                  kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m29s                  kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m29s                  kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m25s                  node-controller  Node addons-481299 event: Registered Node addons-481299 in Controller
	  Normal   NodeReady                5m25s                  kubelet          Node addons-481299 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	[Oct18 10:38] overlayfs: idmapped layers are currently not supported
	[ +31.744006] overlayfs: idmapped layers are currently not supported
	[Oct18 10:40] overlayfs: idmapped layers are currently not supported
	[  +7.638461] overlayfs: idmapped layers are currently not supported
	[Oct18 10:42] overlayfs: idmapped layers are currently not supported
	[ +14.986088] overlayfs: idmapped layers are currently not supported
	[Oct18 10:44] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:59] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9523d2b24f1a] <==
	{"level":"warn","ts":"2025-10-18T12:00:38.303097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.334334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.359427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.390619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.407793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.439357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.481909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.499910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.531023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.698131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.110540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.132268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.020980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.073818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.097695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.122888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.166204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.187897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.301315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.414291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.428689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.446079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.463288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.498849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.513965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8ba5f0c67ede] <==
	2025/10/18 12:02:13 GCP Auth Webhook started!
	2025/10/18 12:03:09 Ready to marshal response ...
	2025/10/18 12:03:09 Ready to write response ...
	2025/10/18 12:03:09 Ready to marshal response ...
	2025/10/18 12:03:09 Ready to write response ...
	
	
	==> kernel <==
	 12:06:11 up  3:48,  0 user,  load average: 1.83, 2.07, 3.04
	Linux addons-481299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [48f480a306b7] <==
	W1018 12:01:17.512420       1 logging.go:55] [core] [Channel #318 SubChannel #319]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1018 12:01:50.694517       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.253.132:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.253.132:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.253.132:443: connect: connection refused" logger="UnhandledError"
	W1018 12:01:50.695079       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:01:50.695174       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 12:01:50.695774       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.253.132:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.253.132:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.253.132:443: connect: connection refused" logger="UnhandledError"
	E1018 12:01:50.703527       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.253.132:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.253.132:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.253.132:443: connect: connection refused" logger="UnhandledError"
	I1018 12:01:50.805045       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1018 12:01:55.712334       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:01:56.735971       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:01:57.836237       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:01:58.879305       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:01:59.936802       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:00.948571       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:01.979892       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:03.081806       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:04.116604       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:05.119843       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:06.126276       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:07.188042       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	W1018 12:02:08.283351       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.18.108:443: connect: connection refused
	I1018 12:03:09.463098       1 controller.go:667] quota admission added evaluator for: jobs.batch.volcano.sh
	I1018 12:03:09.504113       1 controller.go:667] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [c52ddb45ce91] <==
	I1018 12:00:47.000243       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:00:47.000365       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:00:47.000887       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:00:47.001068       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:00:47.001263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:00:47.002337       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 12:00:47.003912       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:00:47.012257       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1018 12:00:56.366824       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 12:00:56.395064       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 12:01:16.965166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 12:01:16.965320       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I1018 12:01:16.965346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 12:01:16.965371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I1018 12:01:16.965388       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I1018 12:01:16.965406       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I1018 12:01:16.965426       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I1018 12:01:16.965447       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I1018 12:01:16.965523       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 12:01:17.007073       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 12:01:17.016807       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 12:01:18.467379       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:01:18.617923       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 12:01:48.486828       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 12:01:48.626811       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [dae56d7c71c1] <==
	I1018 12:00:49.514489       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:00:49.626939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:00:49.727608       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:00:49.727648       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:00:49.727722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:00:49.750914       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:00:49.750964       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:00:49.754697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:00:49.754988       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:00:49.755003       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:49.756307       1 config.go:200] "Starting service config controller"
	I1018 12:00:49.756319       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:00:49.756333       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:00:49.756337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:00:49.756361       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:00:49.756365       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:00:49.757732       1 config.go:309] "Starting node config controller"
	I1018 12:00:49.757742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:00:49.757747       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:00:49.856694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:00:49.856705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:00:49.856722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dba8e74f2f99] <==
	I1018 12:00:40.296381       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:40.299837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:00:40.300872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:00:40.314385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:00:40.314937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:00:40.315179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:00:40.315409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:00:40.315594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:00:40.321949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:00:40.322190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:00:40.322439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:00:40.323011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:00:40.323347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:00:40.323536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:00:40.324267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:00:40.324450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:00:40.324637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:00:40.324822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:00:40.325293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:00:40.325523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:00:40.325545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:00:40.325592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 12:00:41.902177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:03:59 addons-481299 kubelet[2235]: I1018 12:03:59.575975    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j66tb" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:04:04 addons-481299 kubelet[2235]: E1018 12:04:04.577617    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:04:16 addons-481299 kubelet[2235]: E1018 12:04:16.578131    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:04:31 addons-481299 kubelet[2235]: E1018 12:04:31.575873    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:04:42 addons-481299 kubelet[2235]: E1018 12:04:42.823651    2235 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 18 12:04:42 addons-481299 kubelet[2235]: E1018 12:04:42.823707    2235 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 18 12:04:42 addons-481299 kubelet[2235]: E1018 12:04:42.823781    2235 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(211aca47-df9e-41d1-a403-e77c9d85646f): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:04:42 addons-481299 kubelet[2235]: E1018 12:04:42.823813    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:04:54 addons-481299 kubelet[2235]: E1018 12:04:54.576152    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:04:57 addons-481299 kubelet[2235]: I1018 12:04:57.575712    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-zqqmr" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:05:02 addons-481299 kubelet[2235]: E1018 12:05:02.909885    2235 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 12:05:02 addons-481299 kubelet[2235]: E1018 12:05:02.909988    2235 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7e18bec-e709-4ce6-8eb5-addfe6ca58a2-gcr-creds podName:e7e18bec-e709-4ce6-8eb5-addfe6ca58a2 nodeName:}" failed. No retries permitted until 2025-10-18 12:07:04.909969769 +0000 UTC m=+382.470945713 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e7e18bec-e709-4ce6-8eb5-addfe6ca58a2-gcr-creds") pod "registry-creds-764b6fb674-mrx54" (UID: "e7e18bec-e709-4ce6-8eb5-addfe6ca58a2") : secret "registry-creds-gcr" not found
	Oct 18 12:05:05 addons-481299 kubelet[2235]: E1018 12:05:05.575756    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:05:08 addons-481299 kubelet[2235]: I1018 12:05:08.575440    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mx9vb" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:05:11 addons-481299 kubelet[2235]: I1018 12:05:11.576318    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j66tb" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:05:13 addons-481299 kubelet[2235]: E1018 12:05:13.581861    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-mrx54" podUID="e7e18bec-e709-4ce6-8eb5-addfe6ca58a2"
	Oct 18 12:05:16 addons-481299 kubelet[2235]: E1018 12:05:16.583373    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:05:30 addons-481299 kubelet[2235]: E1018 12:05:30.576214    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:05:44 addons-481299 kubelet[2235]: E1018 12:05:44.575677    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:05:55 addons-481299 kubelet[2235]: E1018 12:05:55.576399    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	Oct 18 12:05:58 addons-481299 kubelet[2235]: I1018 12:05:58.578672    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-zqqmr" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:06:07 addons-481299 kubelet[2235]: E1018 12:06:07.931631    2235 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 18 12:06:07 addons-481299 kubelet[2235]: E1018 12:06:07.931680    2235 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 18 12:06:07 addons-481299 kubelet[2235]: E1018 12:06:07.931748    2235 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(211aca47-df9e-41d1-a403-e77c9d85646f): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:06:07 addons-481299 kubelet[2235]: E1018 12:06:07.931778    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="211aca47-df9e-41d1-a403-e77c9d85646f"
	
	
	==> storage-provisioner [25c9372900a2] <==
	W1018 12:05:47.513902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:49.518247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:49.522925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:51.526474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:51.533735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:53.536998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:53.541636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:55.545153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:55.552262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:57.555481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:57.560434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:59.564544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:05:59.571669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:01.574859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:01.586731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:03.589596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:03.594802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:05.597988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:05.604940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:07.607993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:07.612794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:09.615897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:09.623101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:11.629263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:06:11.637099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
helpers_test.go:269: (dbg) Run:  kubectl --context addons-481299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 test-job-nginx-0
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-481299 describe pod ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 test-job-nginx-0
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-481299 describe pod ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 test-job-nginx-0: exit status 1 (111.00865ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xtv6g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wvtk" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-mrx54" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-481299 describe pod ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 test-job-nginx-0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable volcano --alsologtostderr -v=1: (11.835047859s)
--- FAIL: TestAddons/serial/Volcano (211.85s)
TestAddons/parallel/Ingress (492.87s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-481299 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-481299 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-481299 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-18 12:21:23.111697314 +0000 UTC m=+1311.234419930
addons_test.go:252: (dbg) Run:  kubectl --context addons-481299 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-481299 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-481299/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:13:22 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.35
IPs:
  IP:  10.244.0.35
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9dmf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-l9dmf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-481299
  Warning  Failed     6m32s (x2 over 8m)      kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    5m3s (x5 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     5m3s (x5 over 8m)       kubelet            Error: ErrImagePull
  Warning  Failed     5m3s (x3 over 7m47s)    kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    2m52s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2m52s (x21 over 7m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-481299 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-481299 logs nginx -n default: exit status 1 (101.555849ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-481299 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-481299
helpers_test.go:243: (dbg) docker inspect addons-481299:

-- stdout --
	[
	    {
	        "Id": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	        "Created": "2025-10-18T12:00:12.386585691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:00:12.450161174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hosts",
	        "LogPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8-json.log",
	        "Name": "/addons-481299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-481299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-481299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	                "LowerDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc-init/diff:/var/lib/docker/overlay2/6da29ec63a7e3e82b22991cef36ed721f46b0e2f648634b12d0ad6b7225284d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-481299",
	                "Source": "/var/lib/docker/volumes/addons-481299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-481299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-481299",
	                "name.minikube.sigs.k8s.io": "addons-481299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f570c3a0d59e39fd3f8ba76488f6d4555992d509e8801ed716c6c97159646bf",
	            "SandboxKey": "/var/run/docker/netns/9f570c3a0d59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-481299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:8f:15:9b:c7:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bb7b19c2893329488850783d25e49e7f60d5e1cf75347f698d754ce5d4cf2448",
	                    "EndpointID": "51c67f4b258ce43a4b9996e94609bda24fc6b23ad970edc2979b013dbc0a113d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-481299",
	                        "7812e8ac5e4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-481299 -n addons-481299
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 logs -n 25: (1.176104142s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p download-docker-419204 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p download-docker-419204                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p binary-mirror-415208 --alsologtostderr --binary-mirror http://127.0.0.1:32775 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p binary-mirror-415208                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ addons  │ enable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ addons  │ disable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ start   │ -p addons-481299 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 12:02 UTC │
	│ addons  │ addons-481299 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ ip      │ addons-481299 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:07 UTC │
	│ addons  │ addons-481299 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:07 UTC │ 18 Oct 25 12:07 UTC │
	│ addons  │ addons-481299 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ addons  │ addons-481299 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ addons  │ enable headlamp -p addons-481299 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ addons-481299 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ addons-481299 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ addons-481299 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ addons-481299 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ addons-481299 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                             │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	│ addons  │ addons-481299 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:13 UTC │ 18 Oct 25 12:13 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:59:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:59:44.391839  838384 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:44.391981  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.391993  838384 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:44.391997  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.392266  838384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 11:59:44.392787  838384 out.go:368] Setting JSON to false
	I1018 11:59:44.393655  838384 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13335,"bootTime":1760775450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 11:59:44.393734  838384 start.go:141] virtualization:  
	I1018 11:59:44.397036  838384 out.go:179] * [addons-481299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 11:59:44.400845  838384 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:59:44.400912  838384 notify.go:220] Checking for updates...
	I1018 11:59:44.406563  838384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:59:44.409444  838384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 11:59:44.412417  838384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 11:59:44.415444  838384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 11:59:44.418427  838384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:59:44.421615  838384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:59:44.443038  838384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 11:59:44.443164  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.504142  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.494831106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.504254  838384 docker.go:318] overlay module found
	I1018 11:59:44.507452  838384 out.go:179] * Using the docker driver based on user configuration
	I1018 11:59:44.510245  838384 start.go:305] selected driver: docker
	I1018 11:59:44.510268  838384 start.go:925] validating driver "docker" against <nil>
	I1018 11:59:44.510293  838384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:59:44.511015  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.564605  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.555845429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.564766  838384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:59:44.564999  838384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:59:44.567965  838384 out.go:179] * Using Docker driver with root privileges
	I1018 11:59:44.570848  838384 cni.go:84] Creating CNI manager for ""
	I1018 11:59:44.570930  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 11:59:44.570945  838384 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:59:44.571024  838384 start.go:349] cluster config:
	{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:59:44.574127  838384 out.go:179] * Starting "addons-481299" primary control-plane node in "addons-481299" cluster
	I1018 11:59:44.576970  838384 cache.go:123] Beginning downloading kic base image for docker with docker
	I1018 11:59:44.579989  838384 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:59:44.582852  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 11:59:44.582920  838384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1018 11:59:44.582938  838384 cache.go:58] Caching tarball of preloaded images
	I1018 11:59:44.582927  838384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:59:44.583037  838384 preload.go:233] Found /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 11:59:44.583047  838384 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1018 11:59:44.583380  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 11:59:44.583404  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json: {Name:mkb94853570c970a5649e27444200bcb46f3c9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:59:44.598730  838384 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:59:44.598847  838384 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:59:44.598874  838384 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 11:59:44.598880  838384 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 11:59:44.598888  838384 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 11:59:44.598893  838384 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:00:04.269349  838384 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:00:04.269387  838384 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:00:04.269433  838384 start.go:360] acquireMachinesLock for addons-481299: {Name:mkd3a8e0953bb87092640ae884afd157e18dcff7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:00:04.270184  838384 start.go:364] duration metric: took 728.139µs to acquireMachinesLock for "addons-481299"
	I1018 12:00:04.270232  838384 start.go:93] Provisioning new machine with config: &{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:04.270339  838384 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:00:04.273733  838384 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:00:04.273967  838384 start.go:159] libmachine.API.Create for "addons-481299" (driver="docker")
	I1018 12:00:04.274006  838384 client.go:168] LocalClient.Create starting
	I1018 12:00:04.274125  838384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem
	I1018 12:00:04.457164  838384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem
	I1018 12:00:06.171545  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:00:06.188036  838384 cli_runner.go:211] docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:00:06.188147  838384 network_create.go:284] running [docker network inspect addons-481299] to gather additional debugging logs...
	I1018 12:00:06.188169  838384 cli_runner.go:164] Run: docker network inspect addons-481299
	W1018 12:00:06.204439  838384 cli_runner.go:211] docker network inspect addons-481299 returned with exit code 1
	I1018 12:00:06.204474  838384 network_create.go:287] error running [docker network inspect addons-481299]: docker network inspect addons-481299: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-481299 not found
	I1018 12:00:06.204503  838384 network_create.go:289] output of [docker network inspect addons-481299]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-481299 not found
	
	** /stderr **
	I1018 12:00:06.204601  838384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:06.221696  838384 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d74f90}
	I1018 12:00:06.221747  838384 network_create.go:124] attempt to create docker network addons-481299 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:00:06.221806  838384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-481299 addons-481299
	I1018 12:00:06.276048  838384 network_create.go:108] docker network addons-481299 192.168.49.0/24 created
	I1018 12:00:06.276085  838384 kic.go:121] calculated static IP "192.168.49.2" for the "addons-481299" container
	I1018 12:00:06.276177  838384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:00:06.289979  838384 cli_runner.go:164] Run: docker volume create addons-481299 --label name.minikube.sigs.k8s.io=addons-481299 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:00:06.312048  838384 oci.go:103] Successfully created a docker volume addons-481299
	I1018 12:00:06.312139  838384 cli_runner.go:164] Run: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:00:08.424976  838384 cli_runner.go:217] Completed: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.112781921s)
	I1018 12:00:08.425013  838384 oci.go:107] Successfully prepared a docker volume addons-481299
	I1018 12:00:08.425034  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:08.425054  838384 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:00:08.425128  838384 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:00:12.313130  838384 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.887944574s)
	I1018 12:00:12.313163  838384 kic.go:203] duration metric: took 3.888105963s to extract preloaded images to volume ...
	W1018 12:00:12.313309  838384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:00:12.313422  838384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:00:12.370823  838384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-481299 --name addons-481299 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-481299 --network addons-481299 --ip 192.168.49.2 --volume addons-481299:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:00:12.674830  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Running}}
	I1018 12:00:12.693179  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:12.716439  838384 cli_runner.go:164] Run: docker exec addons-481299 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:00:12.770464  838384 oci.go:144] the created container "addons-481299" has a running status.
	I1018 12:00:12.770498  838384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa...
	I1018 12:00:13.113159  838384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:00:13.151031  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.187260  838384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:00:13.187279  838384 kic_runner.go:114] Args: [docker exec --privileged addons-481299 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:00:13.265766  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.292626  838384 machine.go:93] provisionDockerMachine start ...
	I1018 12:00:13.292734  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:13.317367  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:13.317685  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:13.317701  838384 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:00:13.319021  838384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:00:16.468726  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.468765  838384 ubuntu.go:182] provisioning hostname "addons-481299"
	I1018 12:00:16.468854  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.485891  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.486196  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.486212  838384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-481299 && echo "addons-481299" | sudo tee /etc/hostname
	I1018 12:00:16.646538  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.646612  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.664006  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.664309  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.664343  838384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-481299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-481299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-481299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:00:16.809369  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:00:16.809393  838384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-835760/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-835760/.minikube}
	I1018 12:00:16.809411  838384 ubuntu.go:190] setting up certificates
	I1018 12:00:16.809435  838384 provision.go:84] configureAuth start
	I1018 12:00:16.809492  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:16.826275  838384 provision.go:143] copyHostCerts
	I1018 12:00:16.826387  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/ca.pem (1078 bytes)
	I1018 12:00:16.826516  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/cert.pem (1123 bytes)
	I1018 12:00:16.826581  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/key.pem (1679 bytes)
	I1018 12:00:16.826641  838384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem org=jenkins.addons-481299 san=[127.0.0.1 192.168.49.2 addons-481299 localhost minikube]
	I1018 12:00:17.507919  838384 provision.go:177] copyRemoteCerts
	I1018 12:00:17.507999  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:00:17.508042  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.525367  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:17.628972  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:00:17.646764  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:00:17.664067  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:00:17.681849  838384 provision.go:87] duration metric: took 872.389064ms to configureAuth
	I1018 12:00:17.681878  838384 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:00:17.682093  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:17.682153  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.699224  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.699534  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.699549  838384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1018 12:00:17.845632  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1018 12:00:17.845657  838384 ubuntu.go:71] root file system type: overlay
	I1018 12:00:17.845771  838384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1018 12:00:17.845845  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.863079  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.863381  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.863474  838384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1018 12:00:18.022880  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1018 12:00:18.022967  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.040408  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:18.040739  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:18.040762  838384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1018 12:00:18.977721  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:17:10.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-18 12:00:18.020798373 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1018 12:00:18.977763  838384 machine.go:96] duration metric: took 5.685111923s to provisionDockerMachine
	I1018 12:00:18.977774  838384 client.go:171] duration metric: took 14.703758303s to LocalClient.Create
	I1018 12:00:18.977788  838384 start.go:167] duration metric: took 14.703822977s to libmachine.API.Create "addons-481299"
	I1018 12:00:18.977798  838384 start.go:293] postStartSetup for "addons-481299" (driver="docker")
	I1018 12:00:18.977811  838384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:00:18.977881  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:00:18.977924  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.994172  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.097162  838384 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:00:19.100393  838384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:00:19.100424  838384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:00:19.100436  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/addons for local assets ...
	I1018 12:00:19.100511  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/files for local assets ...
	I1018 12:00:19.100544  838384 start.go:296] duration metric: took 122.736875ms for postStartSetup
	I1018 12:00:19.100868  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.116943  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 12:00:19.117256  838384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:00:19.117306  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.133849  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.234008  838384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:00:19.238363  838384 start.go:128] duration metric: took 14.968007424s to createHost
	I1018 12:00:19.238389  838384 start.go:83] releasing machines lock for "addons-481299", held for 14.968185396s
	I1018 12:00:19.238463  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.254388  838384 ssh_runner.go:195] Run: cat /version.json
	I1018 12:00:19.254441  838384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:00:19.254505  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.254443  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.276799  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.279431  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.472177  838384 ssh_runner.go:195] Run: systemctl --version
	I1018 12:00:19.478376  838384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:00:19.482387  838384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:00:19.482484  838384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:00:19.508985  838384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:00:19.509011  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.509045  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.509137  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.523297  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:00:19.531952  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:00:19.540691  838384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:00:19.540763  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:00:19.549594  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.558008  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:00:19.566758  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.576635  838384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:00:19.584880  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:00:19.593685  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:00:19.602453  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1018 12:00:19.611252  838384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:00:19.618753  838384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:00:19.626251  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:19.751139  838384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1018 12:00:19.836762  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.836852  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.836930  838384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1018 12:00:19.852094  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.864977  838384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:00:19.898620  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.911221  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:00:19.924203  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.938348  838384 ssh_runner.go:195] Run: which cri-dockerd
	I1018 12:00:19.942139  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1018 12:00:19.949360  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1018 12:00:19.962074  838384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1018 12:00:20.083838  838384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1018 12:00:20.212913  838384 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1018 12:00:20.213052  838384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1018 12:00:20.227027  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1018 12:00:20.239819  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:20.356987  838384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1018 12:00:20.730788  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:00:20.745019  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1018 12:00:20.760946  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:20.776122  838384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1018 12:00:20.908216  838384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1018 12:00:21.026034  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.144947  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1018 12:00:21.160912  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1018 12:00:21.173334  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.292303  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1018 12:00:21.361374  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:21.374804  838384 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1018 12:00:21.374947  838384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1018 12:00:21.379225  838384 start.go:563] Will wait 60s for crictl version
	I1018 12:00:21.379335  838384 ssh_runner.go:195] Run: which crictl
	I1018 12:00:21.382801  838384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:00:21.411496  838384 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1018 12:00:21.411621  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.433381  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.460386  838384 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1018 12:00:21.460510  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:21.476212  838384 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:00:21.480421  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:00:21.490334  838384 kubeadm.go:883] updating cluster {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:00:21.490454  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:21.490519  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.509291  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.509313  838384 docker.go:621] Images already preloaded, skipping extraction
	I1018 12:00:21.509377  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.528526  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.528551  838384 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:00:21.528561  838384 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1018 12:00:21.528654  838384 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-481299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:00:21.528727  838384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1018 12:00:21.581422  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:21.581453  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:21.581477  838384 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:00:21.581497  838384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-481299 NodeName:addons-481299 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:00:21.581620  838384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-481299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:00:21.581689  838384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:00:21.589319  838384 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:00:21.589438  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:00:21.597066  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 12:00:21.609634  838384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:00:21.622567  838384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:00:21.635067  838384 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:00:21.638568  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:00:21.647824  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.763836  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:21.780526  838384 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299 for IP: 192.168.49.2
	I1018 12:00:21.780549  838384 certs.go:195] generating shared ca certs ...
	I1018 12:00:21.780565  838384 certs.go:227] acquiring lock for ca certs: {Name:mk64b6deab019965f0a71e524905de5b1147e674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:21.780707  838384 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key
	I1018 12:00:22.104055  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt ...
	I1018 12:00:22.104088  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt: {Name:mk219bca6d4f923342f3157b83b20704e67715ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.104280  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key ...
	I1018 12:00:22.104292  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key: {Name:mk5866d61f42a017eec142fb2b809b20ce393fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.105023  838384 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key
	I1018 12:00:22.483966  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt ...
	I1018 12:00:22.483996  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt: {Name:mk2555147315d465bb6cbbe0ada0da38d0982383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484782  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key ...
	I1018 12:00:22.484802  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key: {Name:mke70c887973fdba4f391cbb188fbbcbcc67feb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484942  838384 certs.go:257] generating profile certs ...
	I1018 12:00:22.485032  838384 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key
	I1018 12:00:22.485075  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt with IP's: []
	I1018 12:00:22.766617  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt ...
	I1018 12:00:22.766652  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: {Name:mk71fca78b75ad1dca519de9fb0da28787da9922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.767459  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key ...
	I1018 12:00:22.767475  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key: {Name:mkfc16398f479a2713696abea6afcf11b3f1f676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.768111  838384 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943
	I1018 12:00:22.768134  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:00:23.220784  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 ...
	I1018 12:00:23.220818  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943: {Name:mk6c4c501d6b0340a440642541437e4d17340f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221595  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 ...
	I1018 12:00:23.221617  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943: {Name:mkcf265b1908bbb8a1e1337f558863cea9c9422c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221706  838384 certs.go:382] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt
	I1018 12:00:23.221802  838384 certs.go:386] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key
	I1018 12:00:23.221883  838384 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key
	I1018 12:00:23.221908  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt with IP's: []
	I1018 12:00:24.966646  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt ...
	I1018 12:00:24.966676  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt: {Name:mkee15b7b76ca75c0e94c93d9f26d926d62d1534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.967526  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key ...
	I1018 12:00:24.967544  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key: {Name:mk9dc72444d85c193fa5f2a33fb60909bdb001f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.968384  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:00:24.968428  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:00:24.968456  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:00:24.968483  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem (1679 bytes)
	I1018 12:00:24.969022  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:00:24.988248  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:00:25.007149  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:00:25.033918  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:00:25.066985  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:00:25.087585  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:00:25.108125  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:00:25.128839  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:00:25.148548  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:00:25.168256  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:00:25.184446  838384 ssh_runner.go:195] Run: openssl version
	I1018 12:00:25.191471  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:00:25.200726  838384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205014  838384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:00 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205128  838384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.246539  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:00:25.255568  838384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:00:25.259344  838384 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:00:25.259420  838384 kubeadm.go:400] StartCluster: {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:25.259592  838384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1018 12:00:25.275896  838384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:00:25.283941  838384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:00:25.291784  838384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:00:25.291878  838384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:00:25.299853  838384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:00:25.299873  838384 kubeadm.go:157] found existing configuration files:
	
	I1018 12:00:25.299925  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:00:25.307671  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:00:25.307740  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:00:25.315009  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:00:25.322614  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:00:25.322749  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:00:25.330039  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.337975  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:00:25.338088  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.346012  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:00:25.353964  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:00:25.354066  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:00:25.361694  838384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:00:25.404873  838384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:00:25.405115  838384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:00:25.430950  838384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:00:25.431100  838384 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:00:25.431181  838384 kubeadm.go:318] OS: Linux
	I1018 12:00:25.431259  838384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:00:25.431348  838384 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:00:25.431436  838384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:00:25.431543  838384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:00:25.431629  838384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:00:25.431723  838384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:00:25.431825  838384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:00:25.431922  838384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:00:25.432024  838384 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:00:25.495079  838384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:00:25.495226  838384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:00:25.495348  838384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:00:25.521654  838384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:00:25.528010  838384 out.go:252]   - Generating certificates and keys ...
	I1018 12:00:25.528156  838384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:00:25.528258  838384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:00:26.879731  838384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:00:26.999718  838384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:00:27.674514  838384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:00:27.895095  838384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:00:28.404519  838384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:00:28.404866  838384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:28.991527  838384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:00:28.991878  838384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:29.164485  838384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:00:29.695021  838384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:00:30.175093  838384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:00:30.176279  838384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:00:30.242007  838384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:00:30.627655  838384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:00:31.915030  838384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:00:32.022022  838384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:00:32.099687  838384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:00:32.100473  838384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:00:32.105382  838384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:00:32.109132  838384 out.go:252]   - Booting up control plane ...
	I1018 12:00:32.109259  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:00:32.109346  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:00:32.109424  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:00:32.126476  838384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:00:32.126811  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:00:32.134355  838384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:00:32.134771  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:00:32.135020  838384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:00:32.269048  838384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:00:32.269176  838384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:00:34.270707  838384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001804064s
	I1018 12:00:34.274174  838384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:00:34.274278  838384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:00:34.274378  838384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:00:34.274465  838384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:00:39.829992  838384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.555227625s
	I1018 12:00:40.310124  838384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.035887374s
	I1018 12:00:41.776311  838384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501922731s
	I1018 12:00:41.797119  838384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:00:41.811164  838384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:00:41.829757  838384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:00:41.829974  838384 kubeadm.go:318] [mark-control-plane] Marking the node addons-481299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:00:41.842508  838384 kubeadm.go:318] [bootstrap-token] Using token: 28o1tp.74tu7k9l138h9i1r
	I1018 12:00:41.845472  838384 out.go:252]   - Configuring RBAC rules ...
	I1018 12:00:41.845619  838384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:00:41.851305  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:00:41.861731  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:00:41.865572  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:00:41.869978  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:00:41.874041  838384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:00:42.183860  838384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:00:42.609144  838384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:00:43.183159  838384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:00:43.184156  838384 kubeadm.go:318] 
	I1018 12:00:43.184228  838384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:00:43.184234  838384 kubeadm.go:318] 
	I1018 12:00:43.184315  838384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:00:43.184319  838384 kubeadm.go:318] 
	I1018 12:00:43.184351  838384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:00:43.184413  838384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:00:43.184465  838384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:00:43.184469  838384 kubeadm.go:318] 
	I1018 12:00:43.184525  838384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:00:43.184530  838384 kubeadm.go:318] 
	I1018 12:00:43.184581  838384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:00:43.184586  838384 kubeadm.go:318] 
	I1018 12:00:43.184640  838384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:00:43.184718  838384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:00:43.184789  838384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:00:43.184808  838384 kubeadm.go:318] 
	I1018 12:00:43.184897  838384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:00:43.184977  838384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:00:43.184982  838384 kubeadm.go:318] 
	I1018 12:00:43.185069  838384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185176  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a \
	I1018 12:00:43.185221  838384 kubeadm.go:318] 	--control-plane 
	I1018 12:00:43.185227  838384 kubeadm.go:318] 
	I1018 12:00:43.185315  838384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:00:43.185320  838384 kubeadm.go:318] 
	I1018 12:00:43.185405  838384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185510  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a 
	I1018 12:00:43.189253  838384 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:00:43.189490  838384 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:00:43.189602  838384 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:00:43.189623  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:43.189644  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:43.192807  838384 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 12:00:43.195736  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 12:00:43.204431  838384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
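The 496-byte file copied above is minikube's bridge CNI config for the cluster. As a rough illustration of what such a conflist looks like, here is a minimal sketch built in Python; the field values (bridge name, pod subnet, portmap entry) are assumptions in the shape of a typical CNI bridge config, not the actual contents of `1-k8s.conflist`:

```python
import json

# Hypothetical bridge CNI conflist resembling what minikube writes to
# /etc/cni/net.d/1-k8s.conflist; values are illustrative assumptions,
# not the actual 496-byte file from the log above.
conflist = {
    "cniVersion": "1.0.0",
    "name": "bridge",
    "plugins": [
        {
            "type": "bridge",           # kernel bridge plugin
            "bridge": "bridge",
            "isDefaultGateway": True,   # bridge acts as the pods' gateway
            "ipMasq": True,             # masquerade traffic leaving the node
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/16",  # assumed pod CIDR
            },
        },
        # host-port mappings are delegated to the portmap plugin
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

print(json.dumps(conflist, indent=2))
```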
	I1018 12:00:43.217468  838384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:00:43.217531  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.217601  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-481299 minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-481299 minikube.k8s.io/primary=true
	I1018 12:00:43.380129  838384 ops.go:34] apiserver oom_adj: -16
	I1018 12:00:43.380250  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.880662  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.380452  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.880320  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.381247  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.881072  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.380646  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.880456  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.380377  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.880798  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:48.013839  838384 kubeadm.go:1113] duration metric: took 4.796362824s to wait for elevateKubeSystemPrivileges
	I1018 12:00:48.013872  838384 kubeadm.go:402] duration metric: took 22.754456109s to StartCluster
	I1018 12:00:48.013892  838384 settings.go:142] acquiring lock: {Name:mk03f4782566e111488ff6c3364f55783ae35e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014045  838384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:00:48.014571  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/kubeconfig: {Name:mk9c5565e95c70d1e94713bb6739a4c61bf95b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014840  838384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:48.014969  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:00:48.015286  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.015326  838384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:00:48.015411  838384 addons.go:69] Setting yakd=true in profile "addons-481299"
	I1018 12:00:48.015430  838384 addons.go:238] Setting addon yakd=true in "addons-481299"
	I1018 12:00:48.015461  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.016044  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.016527  838384 addons.go:69] Setting metrics-server=true in profile "addons-481299"
	I1018 12:00:48.016553  838384 addons.go:238] Setting addon metrics-server=true in "addons-481299"
	I1018 12:00:48.016589  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.017051  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.017907  838384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.019860  838384 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-481299"
	I1018 12:00:48.019957  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.021475  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019545  838384 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.022344  838384 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-481299"
	I1018 12:00:48.022416  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.019668  838384 addons.go:69] Setting cloud-spanner=true in profile "addons-481299"
	I1018 12:00:48.024324  838384 addons.go:238] Setting addon cloud-spanner=true in "addons-481299"
	I1018 12:00:48.024374  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.024861  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.025713  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019679  838384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-481299"
	I1018 12:00:48.030476  838384 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:00:48.030552  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.031113  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019684  838384 addons.go:69] Setting default-storageclass=true in profile "addons-481299"
	I1018 12:00:48.041523  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-481299"
	I1018 12:00:48.042108  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019687  838384 addons.go:69] Setting gcp-auth=true in profile "addons-481299"
	I1018 12:00:48.050391  838384 mustload.go:65] Loading cluster: addons-481299
	I1018 12:00:48.050608  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.050867  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019691  838384 addons.go:69] Setting ingress=true in profile "addons-481299"
	I1018 12:00:48.072186  838384 addons.go:238] Setting addon ingress=true in "addons-481299"
	I1018 12:00:48.072238  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.072812  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.104085  838384 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:00:48.019694  838384 addons.go:69] Setting ingress-dns=true in profile "addons-481299"
	I1018 12:00:48.105418  838384 addons.go:238] Setting addon ingress-dns=true in "addons-481299"
	I1018 12:00:48.105466  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.105984  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.112477  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:00:48.112591  838384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:00:48.112702  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019703  838384 addons.go:69] Setting inspektor-gadget=true in profile "addons-481299"
	I1018 12:00:48.128479  838384 addons.go:238] Setting addon inspektor-gadget=true in "addons-481299"
	I1018 12:00:48.128547  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.129027  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.137976  838384 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:00:48.142238  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:00:48.142360  838384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:00:48.142534  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019834  838384 out.go:179] * Verifying Kubernetes components...
	I1018 12:00:48.020098  838384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-481299"
	I1018 12:00:48.152243  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-481299"
	I1018 12:00:48.152617  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020105  838384 addons.go:69] Setting registry=true in profile "addons-481299"
	I1018 12:00:48.157306  838384 addons.go:238] Setting addon registry=true in "addons-481299"
	I1018 12:00:48.157349  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.157826  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020108  838384 addons.go:69] Setting registry-creds=true in profile "addons-481299"
	I1018 12:00:48.184275  838384 addons.go:238] Setting addon registry-creds=true in "addons-481299"
	I1018 12:00:48.184329  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.184843  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.201404  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:48.237763  838384 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:00:48.240788  838384 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:48.240812  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:00:48.240888  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020111  838384 addons.go:69] Setting storage-provisioner=true in profile "addons-481299"
	I1018 12:00:48.255281  838384 addons.go:238] Setting addon storage-provisioner=true in "addons-481299"
	I1018 12:00:48.255324  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.255806  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.263489  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.267247  838384 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:00:48.020115  838384 addons.go:69] Setting volumesnapshots=true in profile "addons-481299"
	I1018 12:00:48.287811  838384 addons.go:238] Setting addon volumesnapshots=true in "addons-481299"
	I1018 12:00:48.287854  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.288331  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.297150  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:00:48.305304  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:00:48.307825  838384 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:48.307845  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:00:48.307915  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020121  838384 addons.go:69] Setting volcano=true in profile "addons-481299"
	I1018 12:00:48.309165  838384 addons.go:238] Setting addon volcano=true in "addons-481299"
	I1018 12:00:48.309239  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.309684  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.323250  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.327844  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:00:48.334482  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:00:48.338284  838384 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:00:48.340947  838384 addons.go:238] Setting addon default-storageclass=true in "addons-481299"
	I1018 12:00:48.340997  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.341575  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.353583  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.359214  838384 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:48.359297  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:00:48.359408  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.368078  838384 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:00:48.390679  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:00:48.394064  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:00:48.396983  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:00:48.397244  838384 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:00:48.406053  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:00:48.407309  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.409466  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:48.409487  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:00:48.409583  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.411154  838384 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:48.411801  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:00:48.411868  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.436870  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:48.436945  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:00:48.437058  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.470112  838384 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:00:48.476747  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:00:48.411213  838384 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:00:48.483079  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.485480  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:00:48.485501  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:00:48.485576  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.486853  838384 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-481299"
	I1018 12:00:48.486939  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.487597  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.517521  838384 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:00:48.517545  838384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:00:48.517619  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.537436  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.538328  838384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:48.538343  838384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:00:48.538398  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.553217  838384 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:00:48.554968  838384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:00:48.556537  838384 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:00:48.556742  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:00:48.556825  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.570246  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:48.570267  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:00:48.570329  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.589916  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.591402  838384 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1018 12:00:48.591565  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:00:48.600425  838384 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1018 12:00:48.600484  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:00:48.600494  838384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:00:48.600575  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.621433  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.622509  838384 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1018 12:00:48.640797  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.660645  838384 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:48.660673  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1018 12:00:48.660741  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.709393  838384 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:00:48.713361  838384 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:00:48.715118  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.737398  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.754446  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.759608  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.767028  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:48.767051  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:00:48.767120  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.774154  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.778151  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.785350  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.790983  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.816526  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.817012  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	W1018 12:00:48.829777  838384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:00:48.829816  838384 retry.go:31] will retry after 218.102369ms: ssh: handshake failed: EOF
	I1018 12:00:48.929211  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:00:48.930600  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:49.692319  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:00:49.692342  838384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:00:49.778672  838384 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:00:49.778699  838384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:00:49.799026  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:00:49.799099  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:00:49.810733  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:49.821114  838384 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:49.821260  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:00:49.834872  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:49.938418  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:49.941990  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:49.960460  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:00:49.960486  838384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:00:49.976966  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:00:49.976992  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:00:49.982393  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:50.005856  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:50.006119  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:00:50.006139  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:00:50.030431  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:50.033442  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:50.078822  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:00:50.078897  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:00:50.223993  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:50.246008  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:00:50.246050  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:00:50.280883  838384 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.280909  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:00:50.285103  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:50.292971  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:00:50.292997  838384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:00:50.305000  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:00:50.305028  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:00:50.307406  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:00:50.307430  838384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:00:50.427676  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:00:50.427702  838384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:00:50.443185  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.443211  838384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:00:50.467032  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:50.523115  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.551023  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.551047  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:00:50.599104  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:00:50.599134  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:00:50.673841  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.676793  838384 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:50.676869  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:00:50.686555  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.800152  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:00:50.800216  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:00:50.933424  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:51.038226  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:00:51.038306  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:00:51.103206  838384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.173952226s)
	I1018 12:00:51.103234  838384 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 12:00:51.104274  838384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.173645704s)
	I1018 12:00:51.104963  838384 node_ready.go:35] waiting up to 6m0s for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108529  838384 node_ready.go:49] node "addons-481299" is "Ready"
	I1018 12:00:51.108556  838384 node_ready.go:38] duration metric: took 3.569435ms for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108574  838384 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:00:51.108657  838384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:00:51.393745  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.582925006s)
	I1018 12:00:51.555496  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:00:51.555567  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:00:51.607401  838384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-481299" context rescaled to 1 replicas
	I1018 12:00:52.004594  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:00:52.004679  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:00:52.455938  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.620977988s)
	I1018 12:00:52.518962  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:00:52.519035  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:00:52.943868  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:00:52.943948  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:00:53.278688  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:53.278715  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 12:00:53.606823  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:54.485982  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.547484935s)
	I1018 12:00:55.375228  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.392799926s)
	I1018 12:00:55.381718  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.439683188s)
	I1018 12:00:55.687452  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:00:55.687560  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:55.715052  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:56.874895  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:00:57.069070  838384 addons.go:238] Setting addon gcp-auth=true in "addons-481299"
	I1018 12:00:57.069155  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:57.069643  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:57.097607  838384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:00:57.097663  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:57.125594  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:57.466799  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.460903197s)
	I1018 12:00:57.466833  838384 addons.go:479] Verifying addon ingress=true in "addons-481299"
	I1018 12:00:57.466986  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436528408s)
	I1018 12:00:57.467006  838384 addons.go:479] Verifying addon registry=true in "addons-481299"
	I1018 12:00:57.467280  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.433810994s)
	I1018 12:00:57.467337  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.243320649s)
	I1018 12:00:57.467373  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.182252425s)
	I1018 12:00:57.471733  838384 out.go:179] * Verifying registry addon...
	I1018 12:00:57.471837  838384 out.go:179] * Verifying ingress addon...
	I1018 12:00:57.475972  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:00:57.476891  838384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:00:57.480952  838384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:00:57.480978  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:57.481343  838384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:00:57.481364  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987158  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987606  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505049  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505238  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102780  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102881  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.537809  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.538274  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.983982  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.989640  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:00.494679  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:00.494785  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.150283  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.151501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.410549  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.94347934s)
	I1018 12:01:01.410647  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.88750974s)
	W1018 12:01:01.410663  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410677  838384 retry.go:31] will retry after 201.547914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410735  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.736813896s)
	I1018 12:01:01.410746  838384 addons.go:479] Verifying addon metrics-server=true in "addons-481299"
	I1018 12:01:01.410781  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.724138069s)
	I1018 12:01:01.410951  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.477459185s)
	W1018 12:01:01.415024  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.415053  838384 retry.go:31] will retry after 281.160141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.410970  838384 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.302296364s)
	I1018 12:01:01.415078  838384 api_server.go:72] duration metric: took 13.40020826s to wait for apiserver process to appear ...
	I1018 12:01:01.415084  838384 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:01:01.415099  838384 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:01:01.411139  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.804213387s)
	I1018 12:01:01.415446  838384 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:01:01.411161  838384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.313535397s)
	I1018 12:01:01.419793  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:01.419828  838384 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-481299 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:01:01.419793  838384 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:01:01.425137  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:01:01.428507  838384 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:01:01.431298  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:01:01.431327  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:01:01.501730  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:01:01.501758  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:01:01.508482  838384 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:01:01.513158  838384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:01:01.513201  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:01.539110  838384 api_server.go:141] control plane version: v1.34.1
	I1018 12:01:01.539156  838384 api_server.go:131] duration metric: took 124.06524ms to wait for apiserver health ...
	I1018 12:01:01.539166  838384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:01:01.613319  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:01.627688  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.627710  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:01:01.665079  838384 system_pods.go:59] 18 kube-system pods found
	I1018 12:01:01.665176  838384 system_pods.go:61] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.665300  838384 system_pods.go:61] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.665325  838384 system_pods.go:61] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.665360  838384 system_pods.go:61] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.665384  838384 system_pods.go:61] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.665405  838384 system_pods.go:61] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.665426  838384 system_pods.go:61] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.665459  838384 system_pods.go:61] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.665480  838384 system_pods.go:61] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.665500  838384 system_pods.go:61] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.665522  838384 system_pods.go:61] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.665545  838384 system_pods.go:61] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.665567  838384 system_pods.go:61] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.665592  838384 system_pods.go:61] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.665624  838384 system_pods.go:61] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.665649  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665673  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665696  838384 system_pods.go:61] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.665719  838384 system_pods.go:74] duration metric: took 126.545392ms to wait for pod list to return data ...
	I1018 12:01:01.665740  838384 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:01:01.666428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.666860  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.697178  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:01.740057  838384 default_sa.go:45] found service account: "default"
	I1018 12:01:01.740128  838384 default_sa.go:55] duration metric: took 74.366134ms for default service account to be created ...
	I1018 12:01:01.740153  838384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:01:01.750661  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.822559  838384 system_pods.go:86] 18 kube-system pods found
	I1018 12:01:01.822661  838384 system_pods.go:89] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.822687  838384 system_pods.go:89] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.822728  838384 system_pods.go:89] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.822757  838384 system_pods.go:89] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.822780  838384 system_pods.go:89] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.822805  838384 system_pods.go:89] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.822838  838384 system_pods.go:89] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.822870  838384 system_pods.go:89] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.822891  838384 system_pods.go:89] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.822912  838384 system_pods.go:89] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.822945  838384 system_pods.go:89] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.822968  838384 system_pods.go:89] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.822993  838384 system_pods.go:89] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.823021  838384 system_pods.go:89] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.823053  838384 system_pods.go:89] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.823081  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823104  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823125  838384 system_pods.go:89] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.823163  838384 system_pods.go:126] duration metric: took 82.986663ms to wait for k8s-apps to be running ...
	I1018 12:01:01.823190  838384 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:01:01.823297  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:01:01.937035  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.020374  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.020832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.430107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.481721  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.482135  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.929149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.979391  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.981564  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.429257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.480836  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.481398  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:03.928420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.981042  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.981494  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.429897  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.480820  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.482111  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:04.534347  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.920939875s)
	W1018 12:01:04.534529  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534551  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.783796022s)
	I1018 12:01:04.534584  838384 retry.go:31] will retry after 344.775954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534505  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.837082754s)
	I1018 12:01:04.534652  838384 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.71132558s)
	I1018 12:01:04.534680  838384 system_svc.go:56] duration metric: took 2.711486896s WaitForService to wait for kubelet
	I1018 12:01:04.534699  838384 kubeadm.go:586] duration metric: took 16.519828999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:01:04.534730  838384 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:01:04.537903  838384 addons.go:479] Verifying addon gcp-auth=true in "addons-481299"
	I1018 12:01:04.538525  838384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:01:04.538550  838384 node_conditions.go:123] node cpu capacity is 2
	I1018 12:01:04.538562  838384 node_conditions.go:105] duration metric: took 3.804704ms to run NodePressure ...
	I1018 12:01:04.538574  838384 start.go:241] waiting for startup goroutines ...
	I1018 12:01:04.541908  838384 out.go:179] * Verifying gcp-auth addon...
	I1018 12:01:04.545758  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:01:04.549219  838384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:01:04.549243  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:04.879563  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:04.929625  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.979948  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.980174  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.049663  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.429694  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:05.481708  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.482094  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:05.550159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.929062  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.026330  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146729751s)
	W1018 12:01:06.026368  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.026387  838384 retry.go:31] will retry after 530.445094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.029966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.030602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.049648  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.428850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.480186  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.480559  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.549270  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.557566  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:06.928852  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.981051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.981269  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.049233  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.428928  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.480325  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.480920  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:07.549301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.666703  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109094231s)
	W1018 12:01:07.666792  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.666842  838384 retry.go:31] will retry after 603.526174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.928884  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.980605  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.981081  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.049925  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.271551  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:08.428903  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.480126  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.481567  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.550087  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.929709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.982028  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.982437  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.049266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.386734  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.115102938s)
	W1018 12:01:09.386771  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.386790  838384 retry.go:31] will retry after 1.698280239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.429436  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.480507  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:09.480673  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.548565  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.929443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.981023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.981260  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.050256  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.428811  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.480068  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.480432  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:10.549416  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.929547  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.980558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.981814  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.049021  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.085333  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:11.429636  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.480969  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.481107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.549095  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.929997  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.981838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.982268  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.050261  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.218914  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133541217s)
	W1018 12:01:12.218954  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.218972  838384 retry.go:31] will retry after 1.345701913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.429128  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.480634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.480946  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.548958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.929428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.980178  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.980839  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.048870  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.429927  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.478960  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:13.480243  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.550057  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.565398  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:13.929670  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.980749  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.981541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.050280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.430257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.481781  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.482216  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:14.549060  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.674833  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.10935166s)
	W1018 12:01:14.674911  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.674935  838384 retry.go:31] will retry after 4.015074409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.929000  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.980481  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.980936  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.049371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.428339  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.481378  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:15.549296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.928645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.979966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.980109  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.049992  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.429300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.481508  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.481893  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:16.549251  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.929096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.979808  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.981556  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.050187  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.428204  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.488687  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.490476  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.549747  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.930135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.979752  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.980568  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.050894  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.428555  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:18.481131  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:18.483023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.691077  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:18.933149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.030226  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.030879  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.049132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.430174  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.481455  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.481882  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.549279  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.871749  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.180617611s)
	W1018 12:01:19.871793  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.871814  838384 retry.go:31] will retry after 2.515978912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.929529  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.980077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.980426  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.052024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.429464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.479838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.480286  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:20.548968  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.928596  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.981305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.981761  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.050329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.430266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.481117  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:21.482392  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.549618  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.930023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.980648  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.981714  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.051678  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.387962  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:22.428709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.480821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:22.481479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.549163  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.929296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.980516  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.982615  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.049456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.430314  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.483574  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.483954  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.548803  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.611192  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.223187854s)
	W1018 12:01:23.611326  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.611398  838384 retry.go:31] will retry after 3.829328757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.930083  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.980905  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.983008  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.050301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.429404  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.480464  838384 kapi.go:107] duration metric: took 27.00449331s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:01:24.481254  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.549300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.933613  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.980927  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.050132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.428170  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.480118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.549464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.930020  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.980129  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.049622  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.429371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.480502  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.549703  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.929758  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.981438  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.049865  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.428814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.441124  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:27.480812  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.549252  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.928712  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.981084  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.049705  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.430621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.481300  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.550023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.635231  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.194068908s)
	W1018 12:01:28.635269  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.635306  838384 retry.go:31] will retry after 12.348738913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.929164  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.980051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.049370  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.429546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.549608  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.928941  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.980182  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.053764  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.428514  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.480543  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.549832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.929633  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.980682  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.048503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.429926  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.480706  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.549798  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.981501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.050135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.430225  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.480064  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.549107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.930034  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.980601  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.050009  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.428665  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.481555  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.549517  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.930160  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.980721  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.049111  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.430075  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.480599  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.549824  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.928660  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.981106  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.049429  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.429715  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.480901  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.549527  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.928904  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.979975  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.049469  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.429388  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.480507  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.549383  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.928634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.980506  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.049878  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.428319  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.480864  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.549709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.933484  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.981299  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.050356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.432420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.531156  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.549592  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.929614  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.980821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.048908  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.432072  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:39.483680  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.549850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.930073  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.042288  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.051491  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.429568  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.529694  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.592500  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.930099  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.981977  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.984969  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:41.048958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.428951  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.480793  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:41.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.935106  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.980130  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.050058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.376984  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.391917505s)
	W1018 12:01:42.377022  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.377040  838384 retry.go:31] will retry after 9.460130047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.430039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.481400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.549280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.930471  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.981285  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.050674  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.429650  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.550049  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.928854  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.980322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.049881  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.429125  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.480353  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.551159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.928689  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.980964  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.084556  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.428634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.529995  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.548921  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.930616  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.981362  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.050003  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.429102  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.480267  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.549814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.928994  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.993938  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.049617  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.429329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:47.481657  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.550619  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.930499  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.032192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.054316  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.440479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.542668  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.550977  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.929346  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.980748  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.048945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.449438  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.498203  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.551135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.928541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.980400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.055907  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.428586  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.481082  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.549349  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.929024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.980094  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.049047  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.431103  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.480475  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.549503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.837922  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:51.929098  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.985759  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.051760  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.429677  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.480886  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.549039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.930278  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.981010  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.049546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.257837  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.419863779s)
	W1018 12:01:53.257928  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.257962  838384 retry.go:31] will retry after 22.303674936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.428816  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.481322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.549342  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.931755  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.981558  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.049443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.430288  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.549297  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.928602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.980986  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.049290  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.429448  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.481581  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.549440  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.929401  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.980439  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.049558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.502856  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.503643  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.549688  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.981333  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.981570  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.049930  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.428413  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.480953  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.549886  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.929377  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.980404  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.049753  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.432621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.480756  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.549719  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.929420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.980375  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.049250  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.428644  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.480847  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.549883  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.928356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.980048  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.051534  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.430130  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.481277  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.549945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.928372  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.980700  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.049545  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.429844  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.551570  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.929456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.982282  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.049630  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.430524  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.481077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.549352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.928839  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.986842  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.049134  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.429457  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.480188  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.549085  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.928535  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.980769  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.048580  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.433304  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.480494  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.550096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.930036  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.980758  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.050726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.429347  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.480398  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.550148  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.928153  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.980059  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.048983  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.428741  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.481118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.548947  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.928168  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.980304  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.049241  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.428880  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.549892  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.981385  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.049855  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.430001  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.480903  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.548710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.932157  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.980207  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.052710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.458645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.502292  838384 kapi.go:107] duration metric: took 1m12.025397786s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:02:09.550352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.930139  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.049906  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.428576  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.550008  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.928058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.049512  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.430390  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.552726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.929915  838384 kapi.go:107] duration metric: took 1m10.504798048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:02:12.049079  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:12.550257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.050305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.549966  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.049761  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.549742  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.050822  838384 kapi.go:107] duration metric: took 1m10.505062422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:02:15.055403  838384 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-481299 cluster.
	I1018 12:02:15.059410  838384 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:02:15.062510  838384 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 12:02:15.562479  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:16.467612  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:16.467639  838384 retry.go:31] will retry after 33.846916648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:50.315580  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:51.158051  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:02:51.158143  838384 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 12:02:51.163303  838384 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, nvidia-device-plugin, registry-creds, ingress-dns, volcano, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 12:02:51.167339  838384 addons.go:514] duration metric: took 2m3.15084241s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher nvidia-device-plugin registry-creds ingress-dns volcano metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 12:02:51.167414  838384 start.go:246] waiting for cluster config update ...
	I1018 12:02:51.167438  838384 start.go:255] writing updated cluster config ...
	I1018 12:02:51.167767  838384 ssh_runner.go:195] Run: rm -f paused
	I1018 12:02:51.172083  838384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:51.175895  838384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.182054  838384 pod_ready.go:94] pod "coredns-66bc5c9577-rz5sw" is "Ready"
	I1018 12:02:51.182084  838384 pod_ready.go:86] duration metric: took 6.16488ms for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.184338  838384 pod_ready.go:83] waiting for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.189308  838384 pod_ready.go:94] pod "etcd-addons-481299" is "Ready"
	I1018 12:02:51.189338  838384 pod_ready.go:86] duration metric: took 4.974639ms for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.191936  838384 pod_ready.go:83] waiting for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.196757  838384 pod_ready.go:94] pod "kube-apiserver-addons-481299" is "Ready"
	I1018 12:02:51.196779  838384 pod_ready.go:86] duration metric: took 4.818772ms for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.199224  838384 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.576040  838384 pod_ready.go:94] pod "kube-controller-manager-addons-481299" is "Ready"
	I1018 12:02:51.576065  838384 pod_ready.go:86] duration metric: took 376.813094ms for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.776236  838384 pod_ready.go:83] waiting for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.176297  838384 pod_ready.go:94] pod "kube-proxy-9r8gt" is "Ready"
	I1018 12:02:52.176326  838384 pod_ready.go:86] duration metric: took 400.06131ms for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.376266  838384 pod_ready.go:83] waiting for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776669  838384 pod_ready.go:94] pod "kube-scheduler-addons-481299" is "Ready"
	I1018 12:02:52.776749  838384 pod_ready.go:86] duration metric: took 400.404692ms for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776777  838384 pod_ready.go:40] duration metric: took 1.604654564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:52.836011  838384 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:02:52.839584  838384 out.go:179] * Done! kubectl is now configured to use "addons-481299" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.407932329Z" level=info msg="ignoring event" container=e3a81c28c53009dbece8c0425f8cabfc39afeaf7d7eb5eafc0056e57a5af036f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.460815245Z" level=info msg="ignoring event" container=59681c39be97dcfbdac9a59658f6b3236b56f61eca857703216c9d282160c9c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.494301036Z" level=info msg="ignoring event" container=f83395435c5607c377b4348ac86efcc4f864951fb3752f131ded23e3c0bab93a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.515987660Z" level=info msg="ignoring event" container=872fdd27e58a8c0f7bd47ed28263de4403f6ac73d2ca9759c27dfee8615be832 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.918588226Z" level=info msg="ignoring event" container=0a72279b138c8d49858b1dbc7246bfe07f3ef6c15e37e62d710b55896c6c3f2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.927263145Z" level=info msg="ignoring event" container=63f35a2a9593ce2aaa598e16302a1faf975919a6baa33c5fa42970eb98d4c58d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:12 addons-481299 dockerd[1124]: time="2025-10-18T12:13:12.953720152Z" level=info msg="ignoring event" container=a9040c2ca882ecfc7eff012dd95aef5bf4a4e902b2d5f74afadc2296c74a29db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:13 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:13Z" level=error msg="error getting RW layer size for container ID '4d48c965fce74779020e7ac5cffa7ccf7be2a2eef884d0762f8be29d7ff09b8f': Error response from daemon: No such container: 4d48c965fce74779020e7ac5cffa7ccf7be2a2eef884d0762f8be29d7ff09b8f"
	Oct 18 12:13:13 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:13Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d48c965fce74779020e7ac5cffa7ccf7be2a2eef884d0762f8be29d7ff09b8f'"
	Oct 18 12:13:13 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:13Z" level=error msg="error getting RW layer size for container ID 'f83395435c5607c377b4348ac86efcc4f864951fb3752f131ded23e3c0bab93a': Error response from daemon: No such container: f83395435c5607c377b4348ac86efcc4f864951fb3752f131ded23e3c0bab93a"
	Oct 18 12:13:13 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:13Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f83395435c5607c377b4348ac86efcc4f864951fb3752f131ded23e3c0bab93a'"
	Oct 18 12:13:23 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b0cd3c798ab947685121b1bfdd985ea0f06cea9f8f5733947502f8367a22a55/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:13:23 addons-481299 dockerd[1124]: time="2025-10-18T12:13:23.668949884Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:13:23 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:23Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 18 12:13:25 addons-481299 dockerd[1124]: time="2025-10-18T12:13:25.058039083Z" level=info msg="ignoring event" container=b202ac5239a46ed1909f3dcc3b74dd7ba7ecf80a0072a134f7bd8ac1450fde8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:25 addons-481299 dockerd[1124]: time="2025-10-18T12:13:25.184610208Z" level=info msg="ignoring event" container=866ba49820966a68b75b5c17fe5485dd0658cbeeebc3331f571a46ace13e16b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:36 addons-481299 dockerd[1124]: time="2025-10-18T12:13:36.821687590Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:14:03 addons-481299 dockerd[1124]: time="2025-10-18T12:14:03.806023437Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:14:51 addons-481299 dockerd[1124]: time="2025-10-18T12:14:51.888152629Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:14:51 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:14:51Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 18 12:16:20 addons-481299 dockerd[1124]: time="2025-10-18T12:16:20.778042963Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:17:51 addons-481299 dockerd[1124]: time="2025-10-18T12:17:51.894099998Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:17:51 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:17:51Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 18 12:19:01 addons-481299 dockerd[1124]: time="2025-10-18T12:19:01.915584136Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:19:01 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:19:01Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	589ac0aebe47c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          14 minutes ago      Running             busybox                   0                   f3d0a3d6da576       busybox                                     default
	8a05108e7b43d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             19 minutes ago      Running             controller                0                   431b9cc69faad       ingress-nginx-controller-675c5ddd98-285md   ingress-nginx
	fece67ae97aa8       9a80c0c8eb61c                                                                                                                19 minutes ago      Exited              patch                     2                   1d4d4fefc0e58       ingress-nginx-admission-patch-2wvtk         ingress-nginx
	53b6871ca3d73       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   19 minutes ago      Exited              create                    0                   a56a89e8a43cc       ingress-nginx-admission-create-xtv6g        ingress-nginx
	81902e2733f2b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd            19 minutes ago      Running             gadget                    0                   680d644fbcd5c       gadget-fg7d4                                gadget
	eab72ef0e77c8       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         19 minutes ago      Running             minikube-ingress-dns      0                   50079659ba367       kube-ingress-dns-minikube                   kube-system
	25c9372900a26       ba04bb24b9575                                                                                                                20 minutes ago      Running             storage-provisioner       0                   4004c0456a8a4       storage-provisioner                         kube-system
	5797fb80a206f       138784d87c9c5                                                                                                                20 minutes ago      Running             coredns                   0                   5fe43b09fa9ef       coredns-66bc5c9577-rz5sw                    kube-system
	dae56d7c71c17       05baa95f5142d                                                                                                                20 minutes ago      Running             kube-proxy                0                   ecfef3e21c7ae       kube-proxy-9r8gt                            kube-system
	dba8e74f2f998       b5f57ec6b9867                                                                                                                20 minutes ago      Running             kube-scheduler            0                   5c4a90f92f416       kube-scheduler-addons-481299                kube-system
	c52ddb45ce912       7eb2c6ff0c5a7                                                                                                                20 minutes ago      Running             kube-controller-manager   0                   2332636f8cc54       kube-controller-manager-addons-481299       kube-system
	9523d2b24f1a6       a1894772a478e                                                                                                                20 minutes ago      Running             etcd                      0                   ea5e934575f93       etcd-addons-481299                          kube-system
	48f480a306b71       43911e833d64d                                                                                                                20 minutes ago      Running             kube-apiserver            0                   ddf248b652d44       kube-apiserver-addons-481299                kube-system
	
	
	==> controller_ingress [8a05108e7b43] <==
	I1018 12:02:10.261398       6 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1018 12:02:10.261848       6 controller.go:214] "Configuration changes detected, backend reload required"
	I1018 12:02:10.271851       6 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1018 12:02:10.273337       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-285md"
	I1018 12:02:10.280403       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	I1018 12:02:10.330867       6 controller.go:228] "Backend successfully reloaded"
	I1018 12:02:10.330933       6 controller.go:240] "Initial sync, sleeping for 1 second"
	I1018 12:02:10.331249       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-285md", UID:"95fa65aa-137a-4c83-b067-43da76402c6e", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1018 12:02:10.381659       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	W1018 12:13:22.403084       6 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1018 12:13:22.404453       6 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1018 12:13:22.411715       6 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W1018 12:13:22.412037       6 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1018 12:13:22.412693       6 controller.go:214] "Configuration changes detected, backend reload required"
	I1018 12:13:22.413594       6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"68687d3d-f77a-4191-af63-1e552c4b4beb", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3127", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I1018 12:13:22.474186       6 controller.go:228] "Backend successfully reloaded"
	I1018 12:13:22.475251       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-285md", UID:"95fa65aa-137a-4c83-b067-43da76402c6e", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1018 12:13:25.746254       6 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1018 12:13:25.746969       6 controller.go:214] "Configuration changes detected, backend reload required"
	I1018 12:13:25.789429       6 controller.go:228] "Backend successfully reloaded"
	I1018 12:13:25.790142       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-285md", UID:"95fa65aa-137a-4c83-b067-43da76402c6e", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1018 12:13:29.079575       6 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1018 12:14:10.280747       6 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	W1018 12:14:10.284960       6 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1018 12:14:10.285757       6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"68687d3d-f77a-4191-af63-1e552c4b4beb", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3242", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	
	
	==> coredns [5797fb80a206] <==
	[INFO] 10.244.0.7:54253 - 32829 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002752352s
	[INFO] 10.244.0.7:54253 - 48434 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000139218s
	[INFO] 10.244.0.7:54253 - 23824 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103517s
	[INFO] 10.244.0.7:35646 - 48715 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000276574s
	[INFO] 10.244.0.7:35646 - 52325 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112625s
	[INFO] 10.244.0.7:45943 - 64938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090488s
	[INFO] 10.244.0.7:45943 - 65152 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099734s
	[INFO] 10.244.0.7:33837 - 21640 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105166s
	[INFO] 10.244.0.7:33837 - 21813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009532s
	[INFO] 10.244.0.7:58089 - 39529 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001295941s
	[INFO] 10.244.0.7:58089 - 39709 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001006773s
	[INFO] 10.244.0.7:49666 - 228 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115406s
	[INFO] 10.244.0.7:49666 - 392 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093999s
	[INFO] 10.244.0.26:58341 - 57714 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271404s
	[INFO] 10.244.0.26:33384 - 49094 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121051s
	[INFO] 10.244.0.26:51350 - 39772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197935s
	[INFO] 10.244.0.26:51818 - 1006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135993s
	[INFO] 10.244.0.26:60342 - 22606 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138184s
	[INFO] 10.244.0.26:55393 - 64846 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151477s
	[INFO] 10.244.0.26:33581 - 50204 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064442s
	[INFO] 10.244.0.26:57107 - 22259 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002808697s
	[INFO] 10.244.0.26:34046 - 55453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002593408s
	[INFO] 10.244.0.26:55019 - 57919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003283041s
	[INFO] 10.244.0.29:34729 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000227268s
	[INFO] 10.244.0.29:39903 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120166s
	
	
	==> describe nodes <==
	Name:               addons-481299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-481299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-481299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-481299
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:00:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-481299
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:21:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:18:03 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:18:03 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:18:03 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:18:03 +0000   Sat, 18 Oct 2025 12:00:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-481299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9d0b013c-4a29-4459-8491-dd5d81ae22da
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  gadget                      gadget-fg7d4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-285md    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-rz5sw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 etcd-addons-481299                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kube-apiserver-addons-481299                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-addons-481299        200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-9r8gt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-addons-481299                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (3%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m                kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           20m                node-controller  Node addons-481299 event: Registered Node addons-481299 in Controller
	  Normal   NodeReady                20m                kubelet          Node addons-481299 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	[Oct18 10:38] overlayfs: idmapped layers are currently not supported
	[ +31.744006] overlayfs: idmapped layers are currently not supported
	[Oct18 10:40] overlayfs: idmapped layers are currently not supported
	[  +7.638461] overlayfs: idmapped layers are currently not supported
	[Oct18 10:42] overlayfs: idmapped layers are currently not supported
	[ +14.986088] overlayfs: idmapped layers are currently not supported
	[Oct18 10:44] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:59] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9523d2b24f1a] <==
	{"level":"warn","ts":"2025-10-18T12:00:38.698131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.110540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.132268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.020980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.073818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.097695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.122888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.166204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.187897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.301315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.414291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.428689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.446079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.463288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.498849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.513965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:10:36.778316Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1761}
	{"level":"info","ts":"2025-10-18T12:10:36.842558Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1761,"took":"61.836891ms","hash":840814350,"current-db-size-bytes":10379264,"current-db-size":"10 MB","current-db-size-in-use-bytes":6549504,"current-db-size-in-use":"6.5 MB"}
	{"level":"info","ts":"2025-10-18T12:10:36.842623Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":840814350,"revision":1761,"compact-revision":-1}
	{"level":"info","ts":"2025-10-18T12:15:36.785445Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2622}
	{"level":"info","ts":"2025-10-18T12:15:36.808741Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2622,"took":"22.492864ms","hash":123848852,"current-db-size-bytes":10379264,"current-db-size":"10 MB","current-db-size-in-use-bytes":4071424,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-10-18T12:15:36.808809Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":123848852,"revision":2622,"compact-revision":1761}
	{"level":"info","ts":"2025-10-18T12:20:36.791725Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3352}
	{"level":"info","ts":"2025-10-18T12:20:36.812015Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3352,"took":"19.623694ms","hash":79906909,"current-db-size-bytes":10379264,"current-db-size":"10 MB","current-db-size-in-use-bytes":2650112,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-10-18T12:20:36.812071Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":79906909,"revision":3352,"compact-revision":2622}
	
	
	==> kernel <==
	 12:21:24 up  4:03,  0 user,  load average: 0.41, 0.50, 1.41
	Linux addons-481299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [48f480a306b7] <==
	W1018 12:06:15.954825       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1018 12:06:16.718440       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1018 12:06:17.004308       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1018 12:06:35.331248       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50972: use of closed network connection
	E1018 12:06:35.593799       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50998: use of closed network connection
	E1018 12:06:35.767189       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51012: use of closed network connection
	I1018 12:10:39.868349       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:13:00.804704       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.109.209"}
	I1018 12:13:10.556490       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:13:10.556539       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:13:10.595627       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:13:10.595683       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:13:10.631293       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:13:10.631365       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:13:10.671606       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:13:10.671658       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:13:10.681002       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:13:10.681051       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1018 12:13:11.672440       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1018 12:13:11.682209       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 12:13:11.780521       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1018 12:13:22.405325       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 12:13:22.766798       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.111.161"}
	I1018 12:13:51.786871       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 12:20:39.869217       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c52ddb45ce91] <==
	E1018 12:20:46.263777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:20:47.018943       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1018 12:20:49.788550       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:20:49.789741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:20:51.672596       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:20:51.673883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:20:57.813715       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:20:57.815028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:01.275273       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:21:01.277151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:02.020064       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1018 12:21:03.062161       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:21:03.064368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:03.475768       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:21:03.477103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:04.923370       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:21:04.924428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:10.937248       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:21:10.938273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:12.132693       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:21:12.134013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:21:17.021170       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1018 12:21:20.166585       1 csi_attacher.go:520] kubernetes.io/csi: Attach timeout after 2m0s [volume=f72d827a-ac1a-11f0-8bc6-d6af51e085fe; attachment.ID=csi-e57a01e7ad295ea97ca707473a1f7662c399dc86362fccbcb7a5a253bd5489b1]
	E1018 12:21:20.166823       1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^f72d827a-ac1a-11f0-8bc6-d6af51e085fe podName: nodeName:}" failed. No retries permitted until 2025-10-18 12:21:24.166798006 +0000 UTC m=+1249.063106184 (durationBeforeRetry 4s). Error: AttachVolume.Attach failed for volume "pvc-786575ea-ab84-44e2-b028-4c9bd16f705b" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^f72d827a-ac1a-11f0-8bc6-d6af51e085fe") from node "addons-481299" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume f72d827a-ac1a-11f0-8bc6-d6af51e085fe
	I1018 12:21:24.220598       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^f72d827a-ac1a-11f0-8bc6-d6af51e085fe" nodeName="addons-481299" scheduledPods=["default/task-pv-pod"]
	
	
	==> kube-proxy [dae56d7c71c1] <==
	I1018 12:00:49.514489       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:00:49.626939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:00:49.727608       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:00:49.727648       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:00:49.727722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:00:49.750914       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:00:49.750964       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:00:49.754697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:00:49.754988       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:00:49.755003       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:49.756307       1 config.go:200] "Starting service config controller"
	I1018 12:00:49.756319       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:00:49.756333       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:00:49.756337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:00:49.756361       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:00:49.756365       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:00:49.757732       1 config.go:309] "Starting node config controller"
	I1018 12:00:49.757742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:00:49.757747       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:00:49.856694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:00:49.856705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:00:49.856722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dba8e74f2f99] <==
	I1018 12:00:40.301037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:00:40.314385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:00:40.314937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:00:40.315179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:00:40.315409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:00:40.315594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:00:40.321949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:00:40.322190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:00:40.322439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:00:40.323011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:00:40.323347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:00:40.323536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:00:40.324267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:00:40.324450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:00:40.324637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:00:40.324822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:00:40.325293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:00:40.325523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:00:40.325545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:00:40.325592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 12:00:41.902177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 12:17:07.710882       1 framework.go:1298] "Plugin failed" err="binding volumes: context deadline exceeded" plugin="VolumeBinding" pod="default/test-local-path" node="addons-481299"
	E1018 12:17:07.710983       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: context deadline exceeded" logger="UnhandledError" pod="default/test-local-path"
	E1018 12:17:09.001766       1 schedule_one.go:191] "Status after running PostFilter plugins for pod" logger="UnhandledError" pod="default/test-local-path" status="not found"
	
	
	==> kubelet <==
	Oct 18 12:19:14 addons-481299 kubelet[2235]: E1018 12:19:14.576370    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:19:23 addons-481299 kubelet[2235]: E1018 12:19:23.577441    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:19:26 addons-481299 kubelet[2235]: E1018 12:19:26.575557    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:19:26 addons-481299 kubelet[2235]: I1018 12:19:26.576095    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:19:30 addons-481299 kubelet[2235]: W1018 12:19:30.443842    2235 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 18 12:19:37 addons-481299 kubelet[2235]: E1018 12:19:37.576017    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:19:38 addons-481299 kubelet[2235]: E1018 12:19:38.578129    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:19:49 addons-481299 kubelet[2235]: E1018 12:19:49.576583    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:19:51 addons-481299 kubelet[2235]: E1018 12:19:51.578265    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:20:03 addons-481299 kubelet[2235]: E1018 12:20:03.575777    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:20:04 addons-481299 kubelet[2235]: E1018 12:20:04.578559    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:20:16 addons-481299 kubelet[2235]: E1018 12:20:16.577995    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:20:18 addons-481299 kubelet[2235]: E1018 12:20:18.575949    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:20:27 addons-481299 kubelet[2235]: E1018 12:20:27.577991    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:20:29 addons-481299 kubelet[2235]: E1018 12:20:29.575642    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:20:36 addons-481299 kubelet[2235]: I1018 12:20:36.575722    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:20:39 addons-481299 kubelet[2235]: E1018 12:20:39.578015    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:20:43 addons-481299 kubelet[2235]: E1018 12:20:43.576162    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:20:54 addons-481299 kubelet[2235]: E1018 12:20:54.579131    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:20:55 addons-481299 kubelet[2235]: E1018 12:20:55.575746    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:21:06 addons-481299 kubelet[2235]: E1018 12:21:06.579427    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:21:08 addons-481299 kubelet[2235]: E1018 12:21:08.575926    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:21:15 addons-481299 kubelet[2235]: W1018 12:21:15.141160    2235 logging.go:55] [core] [Channel #84 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 18 12:21:19 addons-481299 kubelet[2235]: E1018 12:21:19.577429    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="dcae65b2-b4b5-4f3f-bee1-ed97ae7ab660"
	Oct 18 12:21:21 addons-481299 kubelet[2235]: E1018 12:21:21.575851    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	
	
	==> storage-provisioner [25c9372900a2] <==
	W1018 12:21:00.406044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:02.409215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:02.413800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:04.417858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:04.422767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:06.426634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:06.431330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:08.434809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:08.441568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:10.444839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:10.449252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:12.452790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:12.460129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:14.463436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:14.467765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:16.470934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:16.477513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:18.481221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:18.485967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:20.489864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:20.494334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:22.497834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:22.502797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:24.510886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:24.523427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
helpers_test.go:269: (dbg) Run:  kubectl --context addons-481299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-481299 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-481299 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk: exit status 1 (112.872151ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-481299/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:13:22 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:  10.244.0.35
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9dmf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l9dmf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-481299
	  Warning  Failed     6m34s (x2 over 8m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m5s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m5s (x5 over 8m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m5s (x3 over 7m49s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m54s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m54s (x21 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-481299/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:07:07 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chd6h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-chd6h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                   From                     Message
	  ----     ------              ----                  ----                     -------
	  Normal   Scheduled           14m                   default-scheduler        Successfully assigned default/task-pv-pod to addons-481299
	  Warning  Failed              13m                   kubelet                  Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling             11m (x5 over 14m)     kubelet                  Pulling image "docker.io/nginx"
	  Warning  Failed              11m (x4 over 14m)     kubelet                  Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              11m (x5 over 14m)     kubelet                  Error: ErrImagePull
	  Normal   BackOff             4m11s (x42 over 14m)  kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              4m11s (x42 over 14m)  kubelet                  Error: ImagePullBackOff
	  Warning  FailedAttachVolume  5s (x4 over 6m9s)     attachdetach-controller  AttachVolume.Attach failed for volume "pvc-786575ea-ab84-44e2-b028-4c9bd16f705b" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume f72d827a-ac1a-11f0-8bc6-d6af51e085fe
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9qfd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-q9qfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  4m18s  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: context deadline exceeded
	  Warning  FailedScheduling  4m16s  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. not found

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xtv6g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wvtk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-481299 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable ingress-dns --alsologtostderr -v=1: (1.434383318s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable ingress --alsologtostderr -v=1: (7.888408705s)
--- FAIL: TestAddons/parallel/Ingress (492.87s)

                                                
                                    
TestAddons/parallel/CSI (381.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 12:06:56.506834  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 12:06:56.510744  837622 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 12:06:56.510774  837622 kapi.go:107] duration metric: took 7.444055ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.454698ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-481299 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-481299 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [cd2f2cff-a3be-4a44-84b1-1c62af359bbe] Pending
helpers_test.go:352: "task-pv-pod" [cd2f2cff-a3be-4a44-84b1-1c62af359bbe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-18 12:13:07.396308028 +0000 UTC m=+815.519030620
addons_test.go:567: (dbg) Run:  kubectl --context addons-481299 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-481299 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-481299/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:07:07 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
IP:  10.244.0.30
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chd6h (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-chd6h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-481299
Warning  Failed     5m20s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m1s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m1s (x4 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m1s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Normal   BackOff    58s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     58s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:567: (dbg) Run:  kubectl --context addons-481299 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-481299 logs task-pv-pod -n default: exit status 1 (134.918685ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

** /stderr **
addons_test.go:567: kubectl --context addons-481299 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
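Editor's note: the pod events above show every pull of `docker.io/nginx` failing with `toomanyrequests`, i.e. Docker Hub's unauthenticated pull rate limit, so the pod can never leave ImagePullBackOff; the test failure is environmental, not a CSI bug. One common mitigation on shared CI runners (a sketch only, not part of this test run) is to route Docker Hub pulls through a pull-through mirror via the daemon configuration; `mirror.gcr.io` is used here purely as an example mirror:

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```

With this in `/etc/docker/daemon.json` and the daemon restarted, Docker Hub image pulls are attempted against the mirror first; minikube also exposes a `--registry-mirror` flag on `minikube start` that serves the same purpose inside the node. Authenticating pulls (a Docker Hub account raises the rate limit) is the other standard option.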
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-481299
helpers_test.go:243: (dbg) docker inspect addons-481299:

-- stdout --
	[
	    {
	        "Id": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	        "Created": "2025-10-18T12:00:12.386585691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:00:12.450161174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hosts",
	        "LogPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8-json.log",
	        "Name": "/addons-481299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-481299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-481299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	                "LowerDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc-init/diff:/var/lib/docker/overlay2/6da29ec63a7e3e82b22991cef36ed721f46b0e2f648634b12d0ad6b7225284d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-481299",
	                "Source": "/var/lib/docker/volumes/addons-481299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-481299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-481299",
	                "name.minikube.sigs.k8s.io": "addons-481299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f570c3a0d59e39fd3f8ba76488f6d4555992d509e8801ed716c6c97159646bf",
	            "SandboxKey": "/var/run/docker/netns/9f570c3a0d59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-481299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:8f:15:9b:c7:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bb7b19c2893329488850783d25e49e7f60d5e1cf75347f698d754ce5d4cf2448",
	                    "EndpointID": "51c67f4b258ce43a4b9996e94609bda24fc6b23ad970edc2979b013dbc0a113d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-481299",
	                        "7812e8ac5e4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-481299 -n addons-481299
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 logs -n 25: (1.137002847s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ -o=json --download-only -p download-only-223621 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p download-docker-419204 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p download-docker-419204                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p binary-mirror-415208 --alsologtostderr --binary-mirror http://127.0.0.1:32775 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p binary-mirror-415208                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ addons  │ enable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ addons  │ disable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ start   │ -p addons-481299 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 12:02 UTC │
	│ addons  │ addons-481299 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ ip      │ addons-481299 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:07 UTC │
	│ addons  │ addons-481299 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:07 UTC │ 18 Oct 25 12:07 UTC │
	│ addons  │ addons-481299 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ addons  │ addons-481299 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ addons  │ enable headlamp -p addons-481299 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:13 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:59:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:59:44.391839  838384 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:44.391981  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.391993  838384 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:44.391997  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.392266  838384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 11:59:44.392787  838384 out.go:368] Setting JSON to false
	I1018 11:59:44.393655  838384 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13335,"bootTime":1760775450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 11:59:44.393734  838384 start.go:141] virtualization:  
	I1018 11:59:44.397036  838384 out.go:179] * [addons-481299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 11:59:44.400845  838384 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:59:44.400912  838384 notify.go:220] Checking for updates...
	I1018 11:59:44.406563  838384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:59:44.409444  838384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 11:59:44.412417  838384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 11:59:44.415444  838384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 11:59:44.418427  838384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:59:44.421615  838384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:59:44.443038  838384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 11:59:44.443164  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.504142  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.494831106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.504254  838384 docker.go:318] overlay module found
	I1018 11:59:44.507452  838384 out.go:179] * Using the docker driver based on user configuration
	I1018 11:59:44.510245  838384 start.go:305] selected driver: docker
	I1018 11:59:44.510268  838384 start.go:925] validating driver "docker" against <nil>
	I1018 11:59:44.510293  838384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:59:44.511015  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.564605  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.555845429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.564766  838384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:59:44.564999  838384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:59:44.567965  838384 out.go:179] * Using Docker driver with root privileges
	I1018 11:59:44.570848  838384 cni.go:84] Creating CNI manager for ""
	I1018 11:59:44.570930  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 11:59:44.570945  838384 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:59:44.571024  838384 start.go:349] cluster config:
	{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:59:44.574127  838384 out.go:179] * Starting "addons-481299" primary control-plane node in "addons-481299" cluster
	I1018 11:59:44.576970  838384 cache.go:123] Beginning downloading kic base image for docker with docker
	I1018 11:59:44.579989  838384 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:59:44.582852  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 11:59:44.582920  838384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1018 11:59:44.582938  838384 cache.go:58] Caching tarball of preloaded images
	I1018 11:59:44.582927  838384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:59:44.583037  838384 preload.go:233] Found /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 11:59:44.583047  838384 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1018 11:59:44.583380  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 11:59:44.583404  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json: {Name:mkb94853570c970a5649e27444200bcb46f3c9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:59:44.598730  838384 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:59:44.598847  838384 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:59:44.598874  838384 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 11:59:44.598880  838384 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 11:59:44.598888  838384 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 11:59:44.598893  838384 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:00:04.269349  838384 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:00:04.269387  838384 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:00:04.269433  838384 start.go:360] acquireMachinesLock for addons-481299: {Name:mkd3a8e0953bb87092640ae884afd157e18dcff7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:00:04.270184  838384 start.go:364] duration metric: took 728.139µs to acquireMachinesLock for "addons-481299"
	I1018 12:00:04.270232  838384 start.go:93] Provisioning new machine with config: &{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:04.270339  838384 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:00:04.273733  838384 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:00:04.273967  838384 start.go:159] libmachine.API.Create for "addons-481299" (driver="docker")
	I1018 12:00:04.274006  838384 client.go:168] LocalClient.Create starting
	I1018 12:00:04.274125  838384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem
	I1018 12:00:04.457164  838384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem
	I1018 12:00:06.171545  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:00:06.188036  838384 cli_runner.go:211] docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:00:06.188147  838384 network_create.go:284] running [docker network inspect addons-481299] to gather additional debugging logs...
	I1018 12:00:06.188169  838384 cli_runner.go:164] Run: docker network inspect addons-481299
	W1018 12:00:06.204439  838384 cli_runner.go:211] docker network inspect addons-481299 returned with exit code 1
	I1018 12:00:06.204474  838384 network_create.go:287] error running [docker network inspect addons-481299]: docker network inspect addons-481299: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-481299 not found
	I1018 12:00:06.204503  838384 network_create.go:289] output of [docker network inspect addons-481299]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-481299 not found
	
	** /stderr **
	I1018 12:00:06.204601  838384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:06.221696  838384 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d74f90}
	I1018 12:00:06.221747  838384 network_create.go:124] attempt to create docker network addons-481299 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:00:06.221806  838384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-481299 addons-481299
	I1018 12:00:06.276048  838384 network_create.go:108] docker network addons-481299 192.168.49.0/24 created
	I1018 12:00:06.276085  838384 kic.go:121] calculated static IP "192.168.49.2" for the "addons-481299" container
	I1018 12:00:06.276177  838384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:00:06.289979  838384 cli_runner.go:164] Run: docker volume create addons-481299 --label name.minikube.sigs.k8s.io=addons-481299 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:00:06.312048  838384 oci.go:103] Successfully created a docker volume addons-481299
	I1018 12:00:06.312139  838384 cli_runner.go:164] Run: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:00:08.424976  838384 cli_runner.go:217] Completed: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.112781921s)
	I1018 12:00:08.425013  838384 oci.go:107] Successfully prepared a docker volume addons-481299
	I1018 12:00:08.425034  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:08.425054  838384 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:00:08.425128  838384 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:00:12.313130  838384 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.887944574s)
	I1018 12:00:12.313163  838384 kic.go:203] duration metric: took 3.888105963s to extract preloaded images to volume ...
	W1018 12:00:12.313309  838384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:00:12.313422  838384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:00:12.370823  838384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-481299 --name addons-481299 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-481299 --network addons-481299 --ip 192.168.49.2 --volume addons-481299:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:00:12.674830  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Running}}
	I1018 12:00:12.693179  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:12.716439  838384 cli_runner.go:164] Run: docker exec addons-481299 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:00:12.770464  838384 oci.go:144] the created container "addons-481299" has a running status.
	I1018 12:00:12.770498  838384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa...
	I1018 12:00:13.113159  838384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:00:13.151031  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.187260  838384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:00:13.187279  838384 kic_runner.go:114] Args: [docker exec --privileged addons-481299 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:00:13.265766  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.292626  838384 machine.go:93] provisionDockerMachine start ...
	I1018 12:00:13.292734  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:13.317367  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:13.317685  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:13.317701  838384 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:00:13.319021  838384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:00:16.468726  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.468765  838384 ubuntu.go:182] provisioning hostname "addons-481299"
	I1018 12:00:16.468854  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.485891  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.486196  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.486212  838384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-481299 && echo "addons-481299" | sudo tee /etc/hostname
	I1018 12:00:16.646538  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.646612  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.664006  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.664309  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.664343  838384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-481299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-481299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-481299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:00:16.809369  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:00:16.809393  838384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-835760/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-835760/.minikube}
	I1018 12:00:16.809411  838384 ubuntu.go:190] setting up certificates
	I1018 12:00:16.809435  838384 provision.go:84] configureAuth start
	I1018 12:00:16.809492  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:16.826275  838384 provision.go:143] copyHostCerts
	I1018 12:00:16.826387  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/ca.pem (1078 bytes)
	I1018 12:00:16.826516  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/cert.pem (1123 bytes)
	I1018 12:00:16.826581  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/key.pem (1679 bytes)
	I1018 12:00:16.826641  838384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem org=jenkins.addons-481299 san=[127.0.0.1 192.168.49.2 addons-481299 localhost minikube]
	I1018 12:00:17.507919  838384 provision.go:177] copyRemoteCerts
	I1018 12:00:17.507999  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:00:17.508042  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.525367  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:17.628972  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:00:17.646764  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:00:17.664067  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:00:17.681849  838384 provision.go:87] duration metric: took 872.389064ms to configureAuth
	I1018 12:00:17.681878  838384 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:00:17.682093  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:17.682153  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.699224  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.699534  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.699549  838384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1018 12:00:17.845632  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1018 12:00:17.845657  838384 ubuntu.go:71] root file system type: overlay
	I1018 12:00:17.845771  838384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1018 12:00:17.845845  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.863079  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.863381  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.863474  838384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1018 12:00:18.022880  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1018 12:00:18.022967  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.040408  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:18.040739  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:18.040762  838384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1018 12:00:18.977721  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:17:10.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-18 12:00:18.020798373 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1018 12:00:18.977763  838384 machine.go:96] duration metric: took 5.685111923s to provisionDockerMachine
	I1018 12:00:18.977774  838384 client.go:171] duration metric: took 14.703758303s to LocalClient.Create
	I1018 12:00:18.977788  838384 start.go:167] duration metric: took 14.703822977s to libmachine.API.Create "addons-481299"
	I1018 12:00:18.977798  838384 start.go:293] postStartSetup for "addons-481299" (driver="docker")
	I1018 12:00:18.977811  838384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:00:18.977881  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:00:18.977924  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.994172  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.097162  838384 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:00:19.100393  838384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:00:19.100424  838384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:00:19.100436  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/addons for local assets ...
	I1018 12:00:19.100511  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/files for local assets ...
	I1018 12:00:19.100544  838384 start.go:296] duration metric: took 122.736875ms for postStartSetup
	I1018 12:00:19.100868  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.116943  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 12:00:19.117256  838384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:00:19.117306  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.133849  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.234008  838384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:00:19.238363  838384 start.go:128] duration metric: took 14.968007424s to createHost
	I1018 12:00:19.238389  838384 start.go:83] releasing machines lock for "addons-481299", held for 14.968185396s
	I1018 12:00:19.238463  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.254388  838384 ssh_runner.go:195] Run: cat /version.json
	I1018 12:00:19.254441  838384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:00:19.254505  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.254443  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.276799  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.279431  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.472177  838384 ssh_runner.go:195] Run: systemctl --version
	I1018 12:00:19.478376  838384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:00:19.482387  838384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:00:19.482484  838384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:00:19.508985  838384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:00:19.509011  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.509045  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.509137  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.523297  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:00:19.531952  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:00:19.540691  838384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:00:19.540763  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:00:19.549594  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.558008  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:00:19.566758  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.576635  838384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:00:19.584880  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:00:19.593685  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:00:19.602453  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1018 12:00:19.611252  838384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:00:19.618753  838384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:00:19.626251  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:19.751139  838384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1018 12:00:19.836762  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.836852  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.836930  838384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1018 12:00:19.852094  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.864977  838384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:00:19.898620  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.911221  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:00:19.924203  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.938348  838384 ssh_runner.go:195] Run: which cri-dockerd
	I1018 12:00:19.942139  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1018 12:00:19.949360  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1018 12:00:19.962074  838384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1018 12:00:20.083838  838384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1018 12:00:20.212913  838384 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1018 12:00:20.213052  838384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1018 12:00:20.227027  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1018 12:00:20.239819  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:20.356987  838384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1018 12:00:20.730788  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:00:20.745019  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1018 12:00:20.760946  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:20.776122  838384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1018 12:00:20.908216  838384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1018 12:00:21.026034  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.144947  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1018 12:00:21.160912  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1018 12:00:21.173334  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.292303  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1018 12:00:21.361374  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:21.374804  838384 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1018 12:00:21.374947  838384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1018 12:00:21.379225  838384 start.go:563] Will wait 60s for crictl version
	I1018 12:00:21.379335  838384 ssh_runner.go:195] Run: which crictl
	I1018 12:00:21.382801  838384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:00:21.411496  838384 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1018 12:00:21.411621  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.433381  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.460386  838384 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1018 12:00:21.460510  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:21.476212  838384 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:00:21.480421  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:00:21.490334  838384 kubeadm.go:883] updating cluster {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:00:21.490454  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:21.490519  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.509291  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.509313  838384 docker.go:621] Images already preloaded, skipping extraction
	I1018 12:00:21.509377  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.528526  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.528551  838384 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:00:21.528561  838384 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1018 12:00:21.528654  838384 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-481299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:00:21.528727  838384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1018 12:00:21.581422  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:21.581453  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:21.581477  838384 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:00:21.581497  838384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-481299 NodeName:addons-481299 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:00:21.581620  838384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-481299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:00:21.581689  838384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:00:21.589319  838384 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:00:21.589438  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:00:21.597066  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 12:00:21.609634  838384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:00:21.622567  838384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:00:21.635067  838384 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:00:21.638568  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:00:21.647824  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.763836  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:21.780526  838384 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299 for IP: 192.168.49.2
	I1018 12:00:21.780549  838384 certs.go:195] generating shared ca certs ...
	I1018 12:00:21.780565  838384 certs.go:227] acquiring lock for ca certs: {Name:mk64b6deab019965f0a71e524905de5b1147e674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:21.780707  838384 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key
	I1018 12:00:22.104055  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt ...
	I1018 12:00:22.104088  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt: {Name:mk219bca6d4f923342f3157b83b20704e67715ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.104280  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key ...
	I1018 12:00:22.104292  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key: {Name:mk5866d61f42a017eec142fb2b809b20ce393fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.105023  838384 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key
	I1018 12:00:22.483966  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt ...
	I1018 12:00:22.483996  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt: {Name:mk2555147315d465bb6cbbe0ada0da38d0982383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484782  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key ...
	I1018 12:00:22.484802  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key: {Name:mke70c887973fdba4f391cbb188fbbcbcc67feb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484942  838384 certs.go:257] generating profile certs ...
	I1018 12:00:22.485032  838384 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key
	I1018 12:00:22.485075  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt with IP's: []
	I1018 12:00:22.766617  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt ...
	I1018 12:00:22.766652  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: {Name:mk71fca78b75ad1dca519de9fb0da28787da9922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.767459  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key ...
	I1018 12:00:22.767475  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key: {Name:mkfc16398f479a2713696abea6afcf11b3f1f676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.768111  838384 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943
	I1018 12:00:22.768134  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:00:23.220784  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 ...
	I1018 12:00:23.220818  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943: {Name:mk6c4c501d6b0340a440642541437e4d17340f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221595  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 ...
	I1018 12:00:23.221617  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943: {Name:mkcf265b1908bbb8a1e1337f558863cea9c9422c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221706  838384 certs.go:382] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt
	I1018 12:00:23.221802  838384 certs.go:386] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key
	I1018 12:00:23.221883  838384 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key
	I1018 12:00:23.221908  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt with IP's: []
	I1018 12:00:24.966646  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt ...
	I1018 12:00:24.966676  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt: {Name:mkee15b7b76ca75c0e94c93d9f26d926d62d1534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.967526  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key ...
	I1018 12:00:24.967544  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key: {Name:mk9dc72444d85c193fa5f2a33fb60909bdb001f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.968384  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:00:24.968428  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:00:24.968456  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:00:24.968483  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem (1679 bytes)
	I1018 12:00:24.969022  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:00:24.988248  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:00:25.007149  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:00:25.033918  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:00:25.066985  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:00:25.087585  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:00:25.108125  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:00:25.128839  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:00:25.148548  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:00:25.168256  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:00:25.184446  838384 ssh_runner.go:195] Run: openssl version
	I1018 12:00:25.191471  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:00:25.200726  838384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205014  838384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:00 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205128  838384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.246539  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:00:25.255568  838384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:00:25.259344  838384 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:00:25.259420  838384 kubeadm.go:400] StartCluster: {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:25.259592  838384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1018 12:00:25.275896  838384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:00:25.283941  838384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:00:25.291784  838384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:00:25.291878  838384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:00:25.299853  838384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:00:25.299873  838384 kubeadm.go:157] found existing configuration files:
	
	I1018 12:00:25.299925  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:00:25.307671  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:00:25.307740  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:00:25.315009  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:00:25.322614  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:00:25.322749  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:00:25.330039  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.337975  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:00:25.338088  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.346012  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:00:25.353964  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:00:25.354066  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:00:25.361694  838384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:00:25.404873  838384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:00:25.405115  838384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:00:25.430950  838384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:00:25.431100  838384 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:00:25.431181  838384 kubeadm.go:318] OS: Linux
	I1018 12:00:25.431259  838384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:00:25.431348  838384 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:00:25.431436  838384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:00:25.431543  838384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:00:25.431629  838384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:00:25.431723  838384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:00:25.431825  838384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:00:25.431922  838384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:00:25.432024  838384 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:00:25.495079  838384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:00:25.495226  838384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:00:25.495348  838384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:00:25.521654  838384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:00:25.528010  838384 out.go:252]   - Generating certificates and keys ...
	I1018 12:00:25.528156  838384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:00:25.528258  838384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:00:26.879731  838384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:00:26.999718  838384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:00:27.674514  838384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:00:27.895095  838384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:00:28.404519  838384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:00:28.404866  838384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:28.991527  838384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:00:28.991878  838384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:29.164485  838384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:00:29.695021  838384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:00:30.175093  838384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:00:30.176279  838384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:00:30.242007  838384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:00:30.627655  838384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:00:31.915030  838384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:00:32.022022  838384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:00:32.099687  838384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:00:32.100473  838384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:00:32.105382  838384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:00:32.109132  838384 out.go:252]   - Booting up control plane ...
	I1018 12:00:32.109259  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:00:32.109346  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:00:32.109424  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:00:32.126476  838384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:00:32.126811  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:00:32.134355  838384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:00:32.134771  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:00:32.135020  838384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:00:32.269048  838384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:00:32.269176  838384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:00:34.270707  838384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001804064s
	I1018 12:00:34.274174  838384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:00:34.274278  838384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:00:34.274378  838384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:00:34.274465  838384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:00:39.829992  838384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.555227625s
	I1018 12:00:40.310124  838384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.035887374s
	I1018 12:00:41.776311  838384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501922731s
	I1018 12:00:41.797119  838384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:00:41.811164  838384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:00:41.829757  838384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:00:41.829974  838384 kubeadm.go:318] [mark-control-plane] Marking the node addons-481299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:00:41.842508  838384 kubeadm.go:318] [bootstrap-token] Using token: 28o1tp.74tu7k9l138h9i1r
	I1018 12:00:41.845472  838384 out.go:252]   - Configuring RBAC rules ...
	I1018 12:00:41.845619  838384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:00:41.851305  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:00:41.861731  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:00:41.865572  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:00:41.869978  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:00:41.874041  838384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:00:42.183860  838384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:00:42.609144  838384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:00:43.183159  838384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:00:43.184156  838384 kubeadm.go:318] 
	I1018 12:00:43.184228  838384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:00:43.184234  838384 kubeadm.go:318] 
	I1018 12:00:43.184315  838384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:00:43.184319  838384 kubeadm.go:318] 
	I1018 12:00:43.184351  838384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:00:43.184413  838384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:00:43.184465  838384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:00:43.184469  838384 kubeadm.go:318] 
	I1018 12:00:43.184525  838384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:00:43.184530  838384 kubeadm.go:318] 
	I1018 12:00:43.184581  838384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:00:43.184586  838384 kubeadm.go:318] 
	I1018 12:00:43.184640  838384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:00:43.184718  838384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:00:43.184789  838384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:00:43.184808  838384 kubeadm.go:318] 
	I1018 12:00:43.184897  838384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:00:43.184977  838384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:00:43.184982  838384 kubeadm.go:318] 
	I1018 12:00:43.185069  838384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185176  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a \
	I1018 12:00:43.185221  838384 kubeadm.go:318] 	--control-plane 
	I1018 12:00:43.185227  838384 kubeadm.go:318] 
	I1018 12:00:43.185315  838384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:00:43.185320  838384 kubeadm.go:318] 
	I1018 12:00:43.185405  838384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185510  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a 
	I1018 12:00:43.189253  838384 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:00:43.189490  838384 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:00:43.189602  838384 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:00:43.189623  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:43.189644  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:43.192807  838384 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 12:00:43.195736  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 12:00:43.204431  838384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 12:00:43.217468  838384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:00:43.217531  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.217601  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-481299 minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-481299 minikube.k8s.io/primary=true
	I1018 12:00:43.380129  838384 ops.go:34] apiserver oom_adj: -16
	I1018 12:00:43.380250  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.880662  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.380452  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.880320  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.381247  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.881072  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.380646  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.880456  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.380377  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.880798  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:48.013839  838384 kubeadm.go:1113] duration metric: took 4.796362824s to wait for elevateKubeSystemPrivileges
	I1018 12:00:48.013872  838384 kubeadm.go:402] duration metric: took 22.754456109s to StartCluster
	I1018 12:00:48.013892  838384 settings.go:142] acquiring lock: {Name:mk03f4782566e111488ff6c3364f55783ae35e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014045  838384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:00:48.014571  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/kubeconfig: {Name:mk9c5565e95c70d1e94713bb6739a4c61bf95b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014840  838384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:48.014969  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:00:48.015286  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.015326  838384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:00:48.015411  838384 addons.go:69] Setting yakd=true in profile "addons-481299"
	I1018 12:00:48.015430  838384 addons.go:238] Setting addon yakd=true in "addons-481299"
	I1018 12:00:48.015461  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.016044  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.016527  838384 addons.go:69] Setting metrics-server=true in profile "addons-481299"
	I1018 12:00:48.016553  838384 addons.go:238] Setting addon metrics-server=true in "addons-481299"
	I1018 12:00:48.016589  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.017051  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.017907  838384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.019860  838384 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-481299"
	I1018 12:00:48.019957  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.021475  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019545  838384 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.022344  838384 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-481299"
	I1018 12:00:48.022416  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.019668  838384 addons.go:69] Setting cloud-spanner=true in profile "addons-481299"
	I1018 12:00:48.024324  838384 addons.go:238] Setting addon cloud-spanner=true in "addons-481299"
	I1018 12:00:48.024374  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.024861  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.025713  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019679  838384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-481299"
	I1018 12:00:48.030476  838384 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:00:48.030552  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.031113  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019684  838384 addons.go:69] Setting default-storageclass=true in profile "addons-481299"
	I1018 12:00:48.041523  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-481299"
	I1018 12:00:48.042108  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019687  838384 addons.go:69] Setting gcp-auth=true in profile "addons-481299"
	I1018 12:00:48.050391  838384 mustload.go:65] Loading cluster: addons-481299
	I1018 12:00:48.050608  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.050867  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019691  838384 addons.go:69] Setting ingress=true in profile "addons-481299"
	I1018 12:00:48.072186  838384 addons.go:238] Setting addon ingress=true in "addons-481299"
	I1018 12:00:48.072238  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.072812  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.104085  838384 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:00:48.019694  838384 addons.go:69] Setting ingress-dns=true in profile "addons-481299"
	I1018 12:00:48.105418  838384 addons.go:238] Setting addon ingress-dns=true in "addons-481299"
	I1018 12:00:48.105466  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.105984  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.112477  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:00:48.112591  838384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:00:48.112702  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019703  838384 addons.go:69] Setting inspektor-gadget=true in profile "addons-481299"
	I1018 12:00:48.128479  838384 addons.go:238] Setting addon inspektor-gadget=true in "addons-481299"
	I1018 12:00:48.128547  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.129027  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.137976  838384 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:00:48.142238  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:00:48.142360  838384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:00:48.142534  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019834  838384 out.go:179] * Verifying Kubernetes components...
	I1018 12:00:48.020098  838384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-481299"
	I1018 12:00:48.152243  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-481299"
	I1018 12:00:48.152617  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020105  838384 addons.go:69] Setting registry=true in profile "addons-481299"
	I1018 12:00:48.157306  838384 addons.go:238] Setting addon registry=true in "addons-481299"
	I1018 12:00:48.157349  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.157826  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020108  838384 addons.go:69] Setting registry-creds=true in profile "addons-481299"
	I1018 12:00:48.184275  838384 addons.go:238] Setting addon registry-creds=true in "addons-481299"
	I1018 12:00:48.184329  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.184843  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.201404  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:48.237763  838384 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:00:48.240788  838384 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:48.240812  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:00:48.240888  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020111  838384 addons.go:69] Setting storage-provisioner=true in profile "addons-481299"
	I1018 12:00:48.255281  838384 addons.go:238] Setting addon storage-provisioner=true in "addons-481299"
	I1018 12:00:48.255324  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.255806  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.263489  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.267247  838384 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:00:48.020115  838384 addons.go:69] Setting volumesnapshots=true in profile "addons-481299"
	I1018 12:00:48.287811  838384 addons.go:238] Setting addon volumesnapshots=true in "addons-481299"
	I1018 12:00:48.287854  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.288331  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.297150  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:00:48.305304  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:00:48.307825  838384 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:48.307845  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:00:48.307915  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020121  838384 addons.go:69] Setting volcano=true in profile "addons-481299"
	I1018 12:00:48.309165  838384 addons.go:238] Setting addon volcano=true in "addons-481299"
	I1018 12:00:48.309239  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.309684  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.323250  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.327844  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:00:48.334482  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:00:48.338284  838384 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:00:48.340947  838384 addons.go:238] Setting addon default-storageclass=true in "addons-481299"
	I1018 12:00:48.340997  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.341575  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.353583  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.359214  838384 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:48.359297  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:00:48.359408  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.368078  838384 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:00:48.390679  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:00:48.394064  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:00:48.396983  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:00:48.397244  838384 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:00:48.406053  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:00:48.407309  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.409466  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:48.409487  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:00:48.409583  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.411154  838384 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:48.411801  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:00:48.411868  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.436870  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:48.436945  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:00:48.437058  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.470112  838384 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:00:48.476747  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:00:48.411213  838384 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:00:48.483079  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.485480  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:00:48.485501  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:00:48.485576  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.486853  838384 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-481299"
	I1018 12:00:48.486939  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.487597  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.517521  838384 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:00:48.517545  838384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:00:48.517619  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.537436  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.538328  838384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:48.538343  838384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:00:48.538398  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.553217  838384 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:00:48.554968  838384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:00:48.556537  838384 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:00:48.556742  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:00:48.556825  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.570246  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:48.570267  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:00:48.570329  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.589916  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.591402  838384 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1018 12:00:48.591565  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:00:48.600425  838384 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1018 12:00:48.600484  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:00:48.600494  838384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:00:48.600575  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.621433  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.622509  838384 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1018 12:00:48.640797  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.660645  838384 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:48.660673  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1018 12:00:48.660741  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.709393  838384 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:00:48.713361  838384 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:00:48.715118  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.737398  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.754446  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.759608  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.767028  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:48.767051  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:00:48.767120  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.774154  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.778151  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.785350  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.790983  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.816526  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.817012  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	W1018 12:00:48.829777  838384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:00:48.829816  838384 retry.go:31] will retry after 218.102369ms: ssh: handshake failed: EOF
	I1018 12:00:48.929211  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:00:48.930600  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:49.692319  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:00:49.692342  838384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:00:49.778672  838384 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:00:49.778699  838384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:00:49.799026  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:00:49.799099  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:00:49.810733  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:49.821114  838384 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:49.821260  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:00:49.834872  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:49.938418  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:49.941990  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:49.960460  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:00:49.960486  838384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:00:49.976966  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:00:49.976992  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:00:49.982393  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:50.005856  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:50.006119  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:00:50.006139  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:00:50.030431  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:50.033442  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:50.078822  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:00:50.078897  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:00:50.223993  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:50.246008  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:00:50.246050  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:00:50.280883  838384 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.280909  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:00:50.285103  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:50.292971  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:00:50.292997  838384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:00:50.305000  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:00:50.305028  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:00:50.307406  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:00:50.307430  838384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:00:50.427676  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:00:50.427702  838384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:00:50.443185  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.443211  838384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:00:50.467032  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:50.523115  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.551023  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.551047  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:00:50.599104  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:00:50.599134  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:00:50.673841  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.676793  838384 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:50.676869  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:00:50.686555  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.800152  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:00:50.800216  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:00:50.933424  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:51.038226  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:00:51.038306  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:00:51.103206  838384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.173952226s)
	I1018 12:00:51.103234  838384 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 12:00:51.104274  838384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.173645704s)
	I1018 12:00:51.104963  838384 node_ready.go:35] waiting up to 6m0s for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108529  838384 node_ready.go:49] node "addons-481299" is "Ready"
	I1018 12:00:51.108556  838384 node_ready.go:38] duration metric: took 3.569435ms for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108574  838384 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:00:51.108657  838384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:00:51.393745  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.582925006s)
	I1018 12:00:51.555496  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:00:51.555567  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:00:51.607401  838384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-481299" context rescaled to 1 replicas
	I1018 12:00:52.004594  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:00:52.004679  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:00:52.455938  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.620977988s)
	I1018 12:00:52.518962  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:00:52.519035  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:00:52.943868  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:00:52.943948  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:00:53.278688  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:53.278715  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 12:00:53.606823  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:54.485982  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.547484935s)
	I1018 12:00:55.375228  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.392799926s)
	I1018 12:00:55.381718  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.439683188s)
	I1018 12:00:55.687452  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:00:55.687560  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:55.715052  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:56.874895  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:00:57.069070  838384 addons.go:238] Setting addon gcp-auth=true in "addons-481299"
	I1018 12:00:57.069155  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:57.069643  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:57.097607  838384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:00:57.097663  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:57.125594  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:57.466799  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.460903197s)
	I1018 12:00:57.466833  838384 addons.go:479] Verifying addon ingress=true in "addons-481299"
	I1018 12:00:57.466986  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436528408s)
	I1018 12:00:57.467006  838384 addons.go:479] Verifying addon registry=true in "addons-481299"
	I1018 12:00:57.467280  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.433810994s)
	I1018 12:00:57.467337  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.243320649s)
	I1018 12:00:57.467373  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.182252425s)
	I1018 12:00:57.471733  838384 out.go:179] * Verifying registry addon...
	I1018 12:00:57.471837  838384 out.go:179] * Verifying ingress addon...
	I1018 12:00:57.475972  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:00:57.476891  838384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:00:57.480952  838384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:00:57.480978  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:57.481343  838384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:00:57.481364  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987158  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987606  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505049  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505238  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102780  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102881  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.537809  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.538274  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.983982  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.989640  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:00.494679  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:00.494785  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.150283  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.151501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.410549  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.94347934s)
	I1018 12:01:01.410647  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.88750974s)
	W1018 12:01:01.410663  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410677  838384 retry.go:31] will retry after 201.547914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410735  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.736813896s)
	I1018 12:01:01.410746  838384 addons.go:479] Verifying addon metrics-server=true in "addons-481299"
	I1018 12:01:01.410781  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.724138069s)
	I1018 12:01:01.410951  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.477459185s)
	W1018 12:01:01.415024  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.415053  838384 retry.go:31] will retry after 281.160141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.410970  838384 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.302296364s)
	I1018 12:01:01.415078  838384 api_server.go:72] duration metric: took 13.40020826s to wait for apiserver process to appear ...
	I1018 12:01:01.415084  838384 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:01:01.415099  838384 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:01:01.411139  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.804213387s)
	I1018 12:01:01.415446  838384 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:01:01.411161  838384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.313535397s)
	I1018 12:01:01.419793  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:01.419828  838384 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-481299 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:01:01.419793  838384 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:01:01.425137  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:01:01.428507  838384 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:01:01.431298  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:01:01.431327  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:01:01.501730  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:01:01.501758  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:01:01.508482  838384 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:01:01.513158  838384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:01:01.513201  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:01.539110  838384 api_server.go:141] control plane version: v1.34.1
	I1018 12:01:01.539156  838384 api_server.go:131] duration metric: took 124.06524ms to wait for apiserver health ...
	I1018 12:01:01.539166  838384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:01:01.613319  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:01.627688  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.627710  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:01:01.665079  838384 system_pods.go:59] 18 kube-system pods found
	I1018 12:01:01.665176  838384 system_pods.go:61] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.665300  838384 system_pods.go:61] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.665325  838384 system_pods.go:61] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.665360  838384 system_pods.go:61] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.665384  838384 system_pods.go:61] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.665405  838384 system_pods.go:61] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.665426  838384 system_pods.go:61] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.665459  838384 system_pods.go:61] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.665480  838384 system_pods.go:61] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.665500  838384 system_pods.go:61] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.665522  838384 system_pods.go:61] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.665545  838384 system_pods.go:61] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.665567  838384 system_pods.go:61] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.665592  838384 system_pods.go:61] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.665624  838384 system_pods.go:61] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.665649  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665673  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665696  838384 system_pods.go:61] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.665719  838384 system_pods.go:74] duration metric: took 126.545392ms to wait for pod list to return data ...
	I1018 12:01:01.665740  838384 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:01:01.666428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.666860  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.697178  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:01.740057  838384 default_sa.go:45] found service account: "default"
	I1018 12:01:01.740128  838384 default_sa.go:55] duration metric: took 74.366134ms for default service account to be created ...
	I1018 12:01:01.740153  838384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:01:01.750661  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.822559  838384 system_pods.go:86] 18 kube-system pods found
	I1018 12:01:01.822661  838384 system_pods.go:89] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.822687  838384 system_pods.go:89] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.822728  838384 system_pods.go:89] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.822757  838384 system_pods.go:89] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.822780  838384 system_pods.go:89] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.822805  838384 system_pods.go:89] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.822838  838384 system_pods.go:89] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.822870  838384 system_pods.go:89] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.822891  838384 system_pods.go:89] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.822912  838384 system_pods.go:89] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.822945  838384 system_pods.go:89] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.822968  838384 system_pods.go:89] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.822993  838384 system_pods.go:89] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.823021  838384 system_pods.go:89] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.823053  838384 system_pods.go:89] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.823081  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823104  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823125  838384 system_pods.go:89] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.823163  838384 system_pods.go:126] duration metric: took 82.986663ms to wait for k8s-apps to be running ...
	I1018 12:01:01.823190  838384 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:01:01.823297  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:01:01.937035  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.020374  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.020832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.430107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.481721  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.482135  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.929149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.979391  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.981564  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.429257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.480836  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.481398  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:03.928420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.981042  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.981494  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.429897  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.480820  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.482111  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:04.534347  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.920939875s)
	W1018 12:01:04.534529  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534551  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.783796022s)
	I1018 12:01:04.534584  838384 retry.go:31] will retry after 344.775954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534505  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.837082754s)
	I1018 12:01:04.534652  838384 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.71132558s)
	I1018 12:01:04.534680  838384 system_svc.go:56] duration metric: took 2.711486896s WaitForService to wait for kubelet
	I1018 12:01:04.534699  838384 kubeadm.go:586] duration metric: took 16.519828999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:01:04.534730  838384 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:01:04.537903  838384 addons.go:479] Verifying addon gcp-auth=true in "addons-481299"
	I1018 12:01:04.538525  838384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:01:04.538550  838384 node_conditions.go:123] node cpu capacity is 2
	I1018 12:01:04.538562  838384 node_conditions.go:105] duration metric: took 3.804704ms to run NodePressure ...
	I1018 12:01:04.538574  838384 start.go:241] waiting for startup goroutines ...
	I1018 12:01:04.541908  838384 out.go:179] * Verifying gcp-auth addon...
	I1018 12:01:04.545758  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:01:04.549219  838384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:01:04.549243  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:04.879563  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:04.929625  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.979948  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.980174  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.049663  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.429694  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:05.481708  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.482094  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:05.550159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.929062  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.026330  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146729751s)
	W1018 12:01:06.026368  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.026387  838384 retry.go:31] will retry after 530.445094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.029966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.030602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.049648  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.428850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.480186  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.480559  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.549270  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.557566  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:06.928852  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.981051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.981269  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.049233  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.428928  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.480325  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.480920  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:07.549301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.666703  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109094231s)
	W1018 12:01:07.666792  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.666842  838384 retry.go:31] will retry after 603.526174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.928884  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.980605  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.981081  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.049925  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.271551  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:08.428903  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.480126  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.481567  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.550087  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.929709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.982028  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.982437  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.049266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.386734  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.115102938s)
	W1018 12:01:09.386771  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.386790  838384 retry.go:31] will retry after 1.698280239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.429436  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.480507  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:09.480673  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.548565  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.929443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.981023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.981260  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.050256  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.428811  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.480068  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.480432  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:10.549416  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.929547  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.980558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.981814  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.049021  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.085333  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:11.429636  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.480969  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.481107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.549095  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.929997  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.981838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.982268  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.050261  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.218914  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133541217s)
	W1018 12:01:12.218954  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.218972  838384 retry.go:31] will retry after 1.345701913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.429128  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.480634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.480946  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.548958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.929428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.980178  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.980839  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.048870  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.429927  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.478960  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:13.480243  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.550057  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.565398  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:13.929670  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.980749  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.981541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.050280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.430257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.481781  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.482216  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:14.549060  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.674833  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.10935166s)
	W1018 12:01:14.674911  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.674935  838384 retry.go:31] will retry after 4.015074409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.929000  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.980481  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.980936  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.049371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.428339  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.481378  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:15.549296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.928645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.979966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.980109  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.049992  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.429300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.481508  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.481893  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:16.549251  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.929096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.979808  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.981556  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.050187  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.428204  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.488687  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.490476  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.549747  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.930135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.979752  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.980568  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.050894  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.428555  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:18.481131  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:18.483023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.691077  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:18.933149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.030226  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.030879  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.049132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.430174  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.481455  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.481882  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.549279  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.871749  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.180617611s)
	W1018 12:01:19.871793  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.871814  838384 retry.go:31] will retry after 2.515978912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.929529  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.980077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.980426  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.052024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.429464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.479838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.480286  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:20.548968  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.928596  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.981305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.981761  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.050329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.430266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.481117  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:21.482392  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.549618  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.930023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.980648  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.981714  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.051678  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.387962  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:22.428709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.480821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:22.481479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.549163  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.929296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.980516  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.982615  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.049456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.430314  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.483574  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.483954  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.548803  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.611192  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.223187854s)
	W1018 12:01:23.611326  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.611398  838384 retry.go:31] will retry after 3.829328757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.930083  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.980905  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.983008  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.050301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.429404  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.480464  838384 kapi.go:107] duration metric: took 27.00449331s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:01:24.481254  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.549300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.933613  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.980927  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.050132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.428170  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.480118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.549464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.930020  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.980129  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.049622  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.429371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.480502  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.549703  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.929758  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.981438  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.049865  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.428814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.441124  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:27.480812  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.549252  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.928712  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.981084  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.049705  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.430621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.481300  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.550023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.635231  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.194068908s)
	W1018 12:01:28.635269  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.635306  838384 retry.go:31] will retry after 12.348738913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.929164  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.980051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.049370  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.429546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.549608  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.928941  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.980182  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.053764  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.428514  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.480543  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.549832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.929633  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.980682  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.048503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.429926  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.480706  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.549798  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.981501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.050135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.430225  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.480064  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.549107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.930034  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.980601  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.050009  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.428665  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.481555  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.549517  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.930160  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.980721  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.049111  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.430075  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.480599  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.549824  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.928660  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.981106  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.049429  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.429715  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.480901  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.549527  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.928904  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.979975  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.049469  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.429388  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.480507  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.549383  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.928634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.980506  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.049878  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.428319  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.480864  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.549709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.933484  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.981299  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.050356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.432420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.531156  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.549592  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.929614  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.980821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.048908  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.432072  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:39.483680  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.549850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.930073  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.042288  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.051491  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.429568  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.529694  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.592500  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.930099  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.981977  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.984969  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:41.048958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.428951  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.480793  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:41.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.935106  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.980130  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.050058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.376984  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.391917505s)
	W1018 12:01:42.377022  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.377040  838384 retry.go:31] will retry after 9.460130047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
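The retry loop above keeps failing for the same reason each time: kubectl's client-side validation requires every document in an applied manifest to declare both `apiVersion` and `kind`, and something in `ig-crd.yaml` (for example an empty or truncated YAML document) lacks them, so `--force` re-applies never succeed. A minimal sketch of that check (a hypothetical helper, not part of minikube or kubectl; assumes top-level keys start at column 0):

```python
def find_invalid_docs(manifest: str):
    """Return indexes of YAML documents missing apiVersion or kind.

    Simple line-based scan over '---'-separated documents; this mirrors
    the condition kubectl's validation rejects, not its actual parser.
    """
    invalid = []
    for i, doc in enumerate(manifest.split("\n---\n")):
        # collect top-level keys (lines with no leading indent, not comments)
        keys = {
            line.split(":", 1)[0].strip()
            for line in doc.splitlines()
            if line and not line[0].isspace()
            and ":" in line and not line.startswith("#")
        }
        if "apiVersion" not in keys or "kind" not in keys:
            invalid.append(i)
    return invalid


manifest = """apiVersion: v1
kind: Namespace
metadata:
  name: gadget
---
metadata:
  name: broken-crd
"""
print(find_invalid_docs(manifest))  # second document lacks both fields
```

Running a scan like this against the addon manifest would point at the offending document; the real fix is to restore the missing `apiVersion`/`kind` header rather than pass `--validate=false`, which would only suppress the error.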
	I1018 12:01:42.430039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.481400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.549280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.930471  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.981285  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.050674  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.429650  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.550049  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.928854  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.980322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.049881  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.429125  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.480353  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.551159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.928689  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.980964  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.084556  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.428634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.529995  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.548921  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.930616  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.981362  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.050003  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.429102  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.480267  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.549814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.928994  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.993938  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.049617  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.429329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:47.481657  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.550619  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.930499  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.032192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.054316  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.440479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.542668  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.550977  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.929346  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.980748  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.048945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.449438  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.498203  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.551135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.928541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.980400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.055907  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.428586  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.481082  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.549349  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.929024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.980094  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.049047  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.431103  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.480475  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.549503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.837922  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:51.929098  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.985759  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.051760  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.429677  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.480886  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.549039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.930278  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.981010  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.049546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.257837  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.419863779s)
	W1018 12:01:53.257928  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.257962  838384 retry.go:31] will retry after 22.303674936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.428816  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.481322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.549342  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.931755  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.981558  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.049443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.430288  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.549297  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.928602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.980986  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.049290  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.429448  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.481581  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.549440  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.929401  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.980439  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.049558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.502856  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.503643  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.549688  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.981333  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.981570  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.049930  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.428413  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.480953  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.549886  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.929377  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.980404  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.049753  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.432621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.480756  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.549719  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.929420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.980375  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.049250  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.428644  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.480847  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.549883  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.928356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.980048  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.051534  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.430130  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.481277  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.549945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.928372  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.980700  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.049545  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.429844  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.551570  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.929456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.982282  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.049630  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.430524  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.481077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.549352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.928839  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.986842  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.049134  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.429457  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.480188  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.549085  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.928535  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.980769  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.048580  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.433304  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.480494  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.550096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.930036  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.980758  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.050726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.429347  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.480398  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.550148  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.928153  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.980059  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.048983  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.428741  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.481118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.548947  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.928168  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.980304  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.049241  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.428880  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.549892  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.981385  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.049855  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.430001  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.480903  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.548710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.932157  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.980207  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.052710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.458645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.502292  838384 kapi.go:107] duration metric: took 1m12.025397786s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:02:09.550352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.930139  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.049906  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.428576  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.550008  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.928058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.049512  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.430390  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.552726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.929915  838384 kapi.go:107] duration metric: took 1m10.504798048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:02:12.049079  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:12.550257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.050305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.549966  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.049761  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.549742  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.050822  838384 kapi.go:107] duration metric: took 1m10.505062422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:02:15.055403  838384 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-481299 cluster.
	I1018 12:02:15.059410  838384 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:02:15.062510  838384 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 12:02:15.562479  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:16.467612  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:16.467639  838384 retry.go:31] will retry after 33.846916648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:50.315580  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:51.158051  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:02:51.158143  838384 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 12:02:51.163303  838384 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, nvidia-device-plugin, registry-creds, ingress-dns, volcano, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 12:02:51.167339  838384 addons.go:514] duration metric: took 2m3.15084241s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher nvidia-device-plugin registry-creds ingress-dns volcano metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 12:02:51.167414  838384 start.go:246] waiting for cluster config update ...
	I1018 12:02:51.167438  838384 start.go:255] writing updated cluster config ...
	I1018 12:02:51.167767  838384 ssh_runner.go:195] Run: rm -f paused
	I1018 12:02:51.172083  838384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:51.175895  838384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.182054  838384 pod_ready.go:94] pod "coredns-66bc5c9577-rz5sw" is "Ready"
	I1018 12:02:51.182084  838384 pod_ready.go:86] duration metric: took 6.16488ms for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.184338  838384 pod_ready.go:83] waiting for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.189308  838384 pod_ready.go:94] pod "etcd-addons-481299" is "Ready"
	I1018 12:02:51.189338  838384 pod_ready.go:86] duration metric: took 4.974639ms for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.191936  838384 pod_ready.go:83] waiting for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.196757  838384 pod_ready.go:94] pod "kube-apiserver-addons-481299" is "Ready"
	I1018 12:02:51.196779  838384 pod_ready.go:86] duration metric: took 4.818772ms for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.199224  838384 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.576040  838384 pod_ready.go:94] pod "kube-controller-manager-addons-481299" is "Ready"
	I1018 12:02:51.576065  838384 pod_ready.go:86] duration metric: took 376.813094ms for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.776236  838384 pod_ready.go:83] waiting for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.176297  838384 pod_ready.go:94] pod "kube-proxy-9r8gt" is "Ready"
	I1018 12:02:52.176326  838384 pod_ready.go:86] duration metric: took 400.06131ms for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.376266  838384 pod_ready.go:83] waiting for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776669  838384 pod_ready.go:94] pod "kube-scheduler-addons-481299" is "Ready"
	I1018 12:02:52.776749  838384 pod_ready.go:86] duration metric: took 400.404692ms for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776777  838384 pod_ready.go:40] duration metric: took 1.604654564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:52.836011  838384 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:02:52.839584  838384 out.go:179] * Done! kubectl is now configured to use "addons-481299" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 18 12:09:38 addons-481299 dockerd[1124]: time="2025-10-18T12:09:38.643879071Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:09:38 addons-481299 dockerd[1124]: time="2025-10-18T12:09:38.750580748Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:02 addons-481299 dockerd[1124]: time="2025-10-18T12:10:02.633707160Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:10:02 addons-481299 dockerd[1124]: time="2025-10-18T12:10:02.736591036Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:06 addons-481299 dockerd[1124]: time="2025-10-18T12:10:06.803231269Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:53 addons-481299 dockerd[1124]: time="2025-10-18T12:10:53.626885605Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:10:53 addons-481299 dockerd[1124]: time="2025-10-18T12:10:53.717983549Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:11:24 addons-481299 dockerd[1124]: time="2025-10-18T12:11:24.018286377Z" level=info msg="ignoring event" container=a3a466883749eb99a51ce5a71e37bc07f607ee3a8af95ff22135d555287c3eca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:11:54 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/107cd0999ccf87c48f9e724bd424a2e567e6d0af6f99d64ebf5984e860e08ebd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:11:54 addons-481299 dockerd[1124]: time="2025-10-18T12:11:54.590527302Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:11:54 addons-481299 dockerd[1124]: time="2025-10-18T12:11:54.790784425Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:11:54 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:11:54Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Oct 18 12:12:07 addons-481299 dockerd[1124]: time="2025-10-18T12:12:07.617628514Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:12:07 addons-481299 dockerd[1124]: time="2025-10-18T12:12:07.735046551Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:12:15 addons-481299 dockerd[1124]: time="2025-10-18T12:12:15.785519490Z" level=info msg="ignoring event" container=107cd0999ccf87c48f9e724bd424a2e567e6d0af6f99d64ebf5984e860e08ebd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:12:40 addons-481299 dockerd[1124]: time="2025-10-18T12:12:40.734935832Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456
	Oct 18 12:12:40 addons-481299 dockerd[1124]: time="2025-10-18T12:12:40.761721802Z" level=info msg="ignoring event" container=3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:12:40 addons-481299 dockerd[1124]: time="2025-10-18T12:12:40.890434315Z" level=info msg="ignoring event" container=37d7d553854918ef6cab40f799a015c3ceb55a9d90d422e3a873552a3176c17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:12:50 addons-481299 dockerd[1124]: time="2025-10-18T12:12:50.799946218Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:12:59 addons-481299 dockerd[1124]: time="2025-10-18T12:12:59.781469929Z" level=info msg="ignoring event" container=b7a4f459688f41adafab372362c0adf0528f3b7772534c63020d191c11f71323 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:12:59 addons-481299 dockerd[1124]: time="2025-10-18T12:12:59.941469528Z" level=info msg="ignoring event" container=e32407548d49acd0737061163fee10c376171dc6bb80752648a7a4ba6bff2d9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:13:00 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"cloud-spanner-emulator-86bd5cbb97-dzsz8_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e32407548d49acd0737061163fee10c376171dc6bb80752648a7a4ba6bff2d9a\""
	Oct 18 12:13:01 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d1ea064adff3406d7e1c1494b8f731c9d7622a2b9bc0a75238b8dc163ad8f6ec/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:13:01 addons-481299 dockerd[1124]: time="2025-10-18T12:13:01.518099622Z" level=warning msg="reference for unknown type: " digest="sha256:c56f63701f270921f5db163324e2bce03ea0b20234dc76787901a5d27ad7fa71" remote="ghcr.io/headlamp-k8s/headlamp@sha256:c56f63701f270921f5db163324e2bce03ea0b20234dc76787901a5d27ad7fa71"
	Oct 18 12:13:04 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:13:04Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.36.0@sha256:c56f63701f270921f5db163324e2bce03ea0b20234dc76787901a5d27ad7fa71: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:c56f63701f270921f5db163324e2bce03ea0b20234dc76787901a5d27ad7fa71"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	fc8c2ebab821e       ghcr.io/headlamp-k8s/headlamp@sha256:c56f63701f270921f5db163324e2bce03ea0b20234dc76787901a5d27ad7fa71                                        4 seconds ago       Running             headlamp                                 0                   d1ea064adff34       headlamp-6945c6f4d-cbpft                    headlamp
	589ac0aebe47c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   f3d0a3d6da576       busybox                                     default
	4d48c965fce74       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	8a05108e7b43d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             11 minutes ago      Running             controller                               0                   431b9cc69faad       ingress-nginx-controller-675c5ddd98-285md   ingress-nginx
	f83395435c560       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	872fdd27e58a8       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	bfe8ec14dbc5b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	fece67ae97aa8       9a80c0c8eb61c                                                                                                                                11 minutes ago      Exited              patch                                    2                   1d4d4fefc0e58       ingress-nginx-admission-patch-2wvtk         ingress-nginx
	3c690158312df       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	9d6a1790759ab       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	59681c39be97d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   63f35a2a9593c       csi-hostpath-resizer-0                      kube-system
	e3a81c28c5300       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   0a72279b138c8       csi-hostpath-attacher-0                     kube-system
	53b6871ca3d73       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   11 minutes ago      Exited              create                                   0                   a56a89e8a43cc       ingress-nginx-admission-create-xtv6g        ingress-nginx
	81902e2733f2b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd                            11 minutes ago      Running             gadget                                   0                   680d644fbcd5c       gadget-fg7d4                                gadget
	3f96c18e36d8d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   0609845efda87       snapshot-controller-7d9fbc56b8-xfp9p        kube-system
	e1a780c238dba       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   f7f6e990ad6a6       snapshot-controller-7d9fbc56b8-24v9j        kube-system
	b202ac5239a46       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        11 minutes ago      Running             metrics-server                           0                   866ba49820966       metrics-server-85b7d694d7-7dj5n             kube-system
	eab72ef0e77c8       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         11 minutes ago      Running             minikube-ingress-dns                     0                   50079659ba367       kube-ingress-dns-minikube                   kube-system
	25c9372900a26       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   4004c0456a8a4       storage-provisioner                         kube-system
	5797fb80a206f       138784d87c9c5                                                                                                                                12 minutes ago      Running             coredns                                  0                   5fe43b09fa9ef       coredns-66bc5c9577-rz5sw                    kube-system
	dae56d7c71c17       05baa95f5142d                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   ecfef3e21c7ae       kube-proxy-9r8gt                            kube-system
	dba8e74f2f998       b5f57ec6b9867                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   5c4a90f92f416       kube-scheduler-addons-481299                kube-system
	c52ddb45ce912       7eb2c6ff0c5a7                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   2332636f8cc54       kube-controller-manager-addons-481299       kube-system
	9523d2b24f1a6       a1894772a478e                                                                                                                                12 minutes ago      Running             etcd                                     0                   ea5e934575f93       etcd-addons-481299                          kube-system
	48f480a306b71       43911e833d64d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   ddf248b652d44       kube-apiserver-addons-481299                kube-system
	
	
	==> controller_ingress [8a05108e7b43] <==
	I1018 12:02:08.924425       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.1" state="clean" commit="93248f9ae092f571eb870b7664c534bfc7d00f03" platform="linux/arm64"
	I1018 12:02:09.031156       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1018 12:02:09.043091       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1018 12:02:09.058313       6 nginx.go:273] "Starting NGINX Ingress controller"
	I1018 12:02:09.082853       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"42fc76e3-0c78-4800-9165-c304c60bade9", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1018 12:02:09.082897       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"bad76739-6da4-44ee-ad7e-7821403e700d", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1018 12:02:09.082907       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"46559a9e-dfe6-4342-b6c4-a2e0d79ffec0", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1018 12:02:10.260707       6 nginx.go:319] "Starting NGINX process"
	I1018 12:02:10.260918       6 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1018 12:02:10.261398       6 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1018 12:02:10.261848       6 controller.go:214] "Configuration changes detected, backend reload required"
	I1018 12:02:10.271851       6 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1018 12:02:10.273337       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-285md"
	I1018 12:02:10.280403       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	I1018 12:02:10.330867       6 controller.go:228] "Backend successfully reloaded"
	I1018 12:02:10.330933       6 controller.go:240] "Initial sync, sleeping for 1 second"
	I1018 12:02:10.331249       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-285md", UID:"95fa65aa-137a-4c83-b067-43da76402c6e", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1018 12:02:10.381659       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	  Release:       v1.13.3
	  Build:         93851f05e61d99eea49140c9be73499a3cb92ccc
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [5797fb80a206] <==
	[INFO] 10.244.0.7:54253 - 32829 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002752352s
	[INFO] 10.244.0.7:54253 - 48434 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000139218s
	[INFO] 10.244.0.7:54253 - 23824 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103517s
	[INFO] 10.244.0.7:35646 - 48715 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000276574s
	[INFO] 10.244.0.7:35646 - 52325 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112625s
	[INFO] 10.244.0.7:45943 - 64938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090488s
	[INFO] 10.244.0.7:45943 - 65152 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099734s
	[INFO] 10.244.0.7:33837 - 21640 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105166s
	[INFO] 10.244.0.7:33837 - 21813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009532s
	[INFO] 10.244.0.7:58089 - 39529 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001295941s
	[INFO] 10.244.0.7:58089 - 39709 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001006773s
	[INFO] 10.244.0.7:49666 - 228 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115406s
	[INFO] 10.244.0.7:49666 - 392 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093999s
	[INFO] 10.244.0.26:58341 - 57714 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271404s
	[INFO] 10.244.0.26:33384 - 49094 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121051s
	[INFO] 10.244.0.26:51350 - 39772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197935s
	[INFO] 10.244.0.26:51818 - 1006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135993s
	[INFO] 10.244.0.26:60342 - 22606 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138184s
	[INFO] 10.244.0.26:55393 - 64846 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151477s
	[INFO] 10.244.0.26:33581 - 50204 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064442s
	[INFO] 10.244.0.26:57107 - 22259 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002808697s
	[INFO] 10.244.0.26:34046 - 55453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002593408s
	[INFO] 10.244.0.26:55019 - 57919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003283041s
	[INFO] 10.244.0.29:34729 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000227268s
	[INFO] 10.244.0.29:39903 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120166s
	
	
	==> describe nodes <==
	Name:               addons-481299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-481299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-481299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-481299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-481299"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:00:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-481299
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:13:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-481299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9d0b013c-4a29-4459-8491-dd5d81ae22da
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gadget                      gadget-fg7d4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  headlamp                    headlamp-6945c6f4d-cbpft                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-285md    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-rz5sw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-c4hl2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-481299                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-481299                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-481299        200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9r8gt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-481299                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-85b7d694d7-7dj5n              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 registry-creds-764b6fb674-mrx54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-24v9j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-xfp9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node addons-481299 event: Registered Node addons-481299 in Controller
	  Normal   NodeReady                12m                kubelet          Node addons-481299 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	[Oct18 10:38] overlayfs: idmapped layers are currently not supported
	[ +31.744006] overlayfs: idmapped layers are currently not supported
	[Oct18 10:40] overlayfs: idmapped layers are currently not supported
	[  +7.638461] overlayfs: idmapped layers are currently not supported
	[Oct18 10:42] overlayfs: idmapped layers are currently not supported
	[ +14.986088] overlayfs: idmapped layers are currently not supported
	[Oct18 10:44] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:59] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9523d2b24f1a] <==
	{"level":"warn","ts":"2025-10-18T12:00:38.390619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.407793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.439357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.481909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.499910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.531023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.698131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.110540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.132268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.020980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.073818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.097695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.122888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.166204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.187897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.301315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.414291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.428689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.446079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.463288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.498849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.513965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:10:36.778316Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1761}
	{"level":"info","ts":"2025-10-18T12:10:36.842558Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1761,"took":"61.836891ms","hash":840814350,"current-db-size-bytes":10379264,"current-db-size":"10 MB","current-db-size-in-use-bytes":6549504,"current-db-size-in-use":"6.5 MB"}
	{"level":"info","ts":"2025-10-18T12:10:36.842623Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":840814350,"revision":1761,"compact-revision":-1}
	
	
	==> kernel <==
	 12:13:09 up  3:55,  0 user,  load average: 0.40, 0.92, 2.12
	Linux addons-481299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [48f480a306b7] <==
	E1018 12:06:14.432289       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
	E1018 12:06:14.437579       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	E1018 12:06:14.471263       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	I1018 12:06:14.538453       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1018 12:06:14.605852       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1018 12:06:14.732689       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1018 12:06:14.759022       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1018 12:06:14.794923       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1018 12:06:15.131412       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1018 12:06:15.349953       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	W1018 12:06:15.357404       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	I1018 12:06:15.539391       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I1018 12:06:15.717485       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1018 12:06:15.727618       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1018 12:06:15.826396       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1018 12:06:15.848126       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1018 12:06:15.872978       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1018 12:06:15.954825       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1018 12:06:16.718440       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1018 12:06:17.004308       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1018 12:06:35.331248       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50972: use of closed network connection
	E1018 12:06:35.593799       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50998: use of closed network connection
	E1018 12:06:35.767189       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51012: use of closed network connection
	I1018 12:10:39.868349       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:13:00.804704       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.109.209"}
	
	
	==> kube-controller-manager [c52ddb45ce91] <==
	E1018 12:12:06.648481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:09.188245       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:09.189757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:16.992470       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1018 12:12:18.817874       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:18.818939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:26.492058       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:26.493374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:29.604099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:29.605439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:31.992809       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1018 12:12:34.057761       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:34.058860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:43.948451       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:43.949785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:44.471937       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:44.473141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:46.992824       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1018 12:12:48.820966       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:48.822314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:50.168726       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:50.170107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:59.271643       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:59.272686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:01.993747       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [dae56d7c71c1] <==
	I1018 12:00:49.514489       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:00:49.626939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:00:49.727608       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:00:49.727648       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:00:49.727722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:00:49.750914       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:00:49.750964       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:00:49.754697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:00:49.754988       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:00:49.755003       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:49.756307       1 config.go:200] "Starting service config controller"
	I1018 12:00:49.756319       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:00:49.756333       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:00:49.756337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:00:49.756361       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:00:49.756365       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:00:49.757732       1 config.go:309] "Starting node config controller"
	I1018 12:00:49.757742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:00:49.757747       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:00:49.856694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:00:49.856705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:00:49.856722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dba8e74f2f99] <==
	I1018 12:00:40.296381       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:40.299837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:00:40.300872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:00:40.314385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:00:40.314937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:00:40.315179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:00:40.315409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:00:40.315594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:00:40.321949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:00:40.322190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:00:40.322439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:00:40.323011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:00:40.323347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:00:40.323536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:00:40.324267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:00:40.324450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:00:40.324637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:00:40.324822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:00:40.325293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:00:40.325523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:00:40.325545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:00:40.325592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 12:00:41.902177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:12:35 addons-481299 kubelet[2235]: E1018 12:12:35.576398    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.125665    2235 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6-config-volume\") pod \"6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6\" (UID: \"6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6\") "
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.125728    2235 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6c7\" (UniqueName: \"kubernetes.io/projected/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6-kube-api-access-gm6c7\") pod \"6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6\" (UID: \"6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6\") "
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.126436    2235 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6" (UID: "6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.128145    2235 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6-kube-api-access-gm6c7" (OuterVolumeSpecName: "kube-api-access-gm6c7") pod "6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6" (UID: "6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6"). InnerVolumeSpecName "kube-api-access-gm6c7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.226835    2235 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6-config-volume\") on node \"addons-481299\" DevicePath \"\""
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.226881    2235 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gm6c7\" (UniqueName: \"kubernetes.io/projected/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6-kube-api-access-gm6c7\") on node \"addons-481299\" DevicePath \"\""
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.578199    2235 scope.go:117] "RemoveContainer" containerID="3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456"
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.607762    2235 scope.go:117] "RemoveContainer" containerID="3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456"
	Oct 18 12:12:41 addons-481299 kubelet[2235]: E1018 12:12:41.608746    2235 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456" containerID="3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456"
	Oct 18 12:12:41 addons-481299 kubelet[2235]: I1018 12:12:41.608791    2235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456"} err="failed to get container status \"3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3315bc7f0cf33e820d2931eb80f0f980b9e6c555c85baf18447095e37f6b6456"
	Oct 18 12:12:42 addons-481299 kubelet[2235]: I1018 12:12:42.601388    2235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6" path="/var/lib/kubelet/pods/6ca3c8ea-05c2-47b0-bfea-d4b13497bcf6/volumes"
	Oct 18 12:12:46 addons-481299 kubelet[2235]: I1018 12:12:46.576014    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:12:50 addons-481299 kubelet[2235]: E1018 12:12:50.803336    2235 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 12:12:50 addons-481299 kubelet[2235]: E1018 12:12:50.803393    2235 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 12:12:50 addons-481299 kubelet[2235]: E1018 12:12:50.803475    2235 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(cd2f2cff-a3be-4a44-84b1-1c62af359bbe): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:12:50 addons-481299 kubelet[2235]: E1018 12:12:50.803507    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:13:00 addons-481299 kubelet[2235]: I1018 12:13:00.199624    2235 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vhv2\" (UniqueName: \"kubernetes.io/projected/ffd4842c-09b4-4973-9866-1ce0d5dbd809-kube-api-access-4vhv2\") pod \"ffd4842c-09b4-4973-9866-1ce0d5dbd809\" (UID: \"ffd4842c-09b4-4973-9866-1ce0d5dbd809\") "
	Oct 18 12:13:00 addons-481299 kubelet[2235]: I1018 12:13:00.229468    2235 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd4842c-09b4-4973-9866-1ce0d5dbd809-kube-api-access-4vhv2" (OuterVolumeSpecName: "kube-api-access-4vhv2") pod "ffd4842c-09b4-4973-9866-1ce0d5dbd809" (UID: "ffd4842c-09b4-4973-9866-1ce0d5dbd809"). InnerVolumeSpecName "kube-api-access-4vhv2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 12:13:00 addons-481299 kubelet[2235]: I1018 12:13:00.302009    2235 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4vhv2\" (UniqueName: \"kubernetes.io/projected/ffd4842c-09b4-4973-9866-1ce0d5dbd809-kube-api-access-4vhv2\") on node \"addons-481299\" DevicePath \"\""
	Oct 18 12:13:01 addons-481299 kubelet[2235]: I1018 12:13:01.009522    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58d5l\" (UniqueName: \"kubernetes.io/projected/a08dc186-c164-455b-b70c-643689b2406d-kube-api-access-58d5l\") pod \"headlamp-6945c6f4d-cbpft\" (UID: \"a08dc186-c164-455b-b70c-643689b2406d\") " pod="headlamp/headlamp-6945c6f4d-cbpft"
	Oct 18 12:13:01 addons-481299 kubelet[2235]: I1018 12:13:01.117784    2235 scope.go:117] "RemoveContainer" containerID="b7a4f459688f41adafab372362c0adf0528f3b7772534c63020d191c11f71323"
	Oct 18 12:13:02 addons-481299 kubelet[2235]: I1018 12:13:02.599153    2235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd4842c-09b4-4973-9866-1ce0d5dbd809" path="/var/lib/kubelet/pods/ffd4842c-09b4-4973-9866-1ce0d5dbd809/volumes"
	Oct 18 12:13:04 addons-481299 kubelet[2235]: E1018 12:13:04.576563    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:13:05 addons-481299 kubelet[2235]: I1018 12:13:05.255216    2235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-6945c6f4d-cbpft" podStartSLOduration=2.11412549 podStartE2EDuration="5.255196112s" podCreationTimestamp="2025-10-18 12:13:00 +0000 UTC" firstStartedPulling="2025-10-18 12:13:01.461645157 +0000 UTC m=+739.022621100" lastFinishedPulling="2025-10-18 12:13:04.602715778 +0000 UTC m=+742.163691722" observedRunningTime="2025-10-18 12:13:05.23752353 +0000 UTC m=+742.798499499" watchObservedRunningTime="2025-10-18 12:13:05.255196112 +0000 UTC m=+742.816172055"
	
	
	==> storage-provisioner [25c9372900a2] <==
	W1018 12:12:43.901996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:45.904958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:45.913392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:47.917074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:47.921762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:49.924971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:49.931932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:51.936193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:51.941028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:53.944719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:53.951471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:55.954659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:55.959097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:57.962729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:57.968202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:59.974367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:59.982332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:01.985879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:01.996341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:04.000446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:04.011101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:06.014467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:06.019712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:08.023944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:08.033348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
helpers_test.go:269: (dbg) Run:  kubectl --context addons-481299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-481299 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-481299 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54: exit status 1 (126.037348ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-481299/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:07:07 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chd6h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-chd6h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-481299
	  Warning  Failed     5m23s                kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m4s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m4s (x4 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m4s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    61s (x21 over 6m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     61s (x21 over 6m2s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9qfd (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-q9qfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xtv6g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wvtk" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-mrx54" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-481299 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.069888285s)
--- FAIL: TestAddons/parallel/CSI (381.62s)

x
+
TestAddons/parallel/LocalPath (345.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-481299 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-481299 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-481299 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.304µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
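The repeated helpers_test.go:402 lines above record a poll-until-deadline loop: the test re-runs `kubectl get pvc test-pvc -o jsonpath={.status.phase}` until the phase reads `Bound` or the context deadline fires. A minimal, hypothetical shell sketch of that pattern (the `echo Bound` stub stands in for the real kubectl call, so it runs without a cluster; `wait_for` is an illustrative helper, not minikube's code):

```shell
#!/bin/sh
# wait_for SECS CMD... : re-run CMD every second until it prints "Bound"
# or SECS have elapsed; mirrors the test's PVC phase polling.
wait_for() {
  deadline=$(( $(date +%s) + $1 )); shift
  while [ "$(date +%s)" -lt "$deadline" ]; do
    out=$("$@" 2>/dev/null)
    if [ "$out" = "Bound" ]; then
      echo "Bound"
      return 0
    fi
    sleep 1
  done
  echo "context deadline exceeded" >&2
  return 1
}

# Example with a stub in place of kubectl:
wait_for 5 echo Bound
```

In the failing run above the stub would instead keep returning `Pending` (or nothing), so the loop exhausts the deadline and the test reports "context deadline exceeded".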
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-481299
helpers_test.go:243: (dbg) docker inspect addons-481299:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	        "Created": "2025-10-18T12:00:12.386585691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:00:12.450161174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/hosts",
	        "LogPath": "/var/lib/docker/containers/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8/7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8-json.log",
	        "Name": "/addons-481299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-481299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-481299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7812e8ac5e4a387c9bbaaa2096c54fe2184756a688a7adbdb6111ae0359f40c8",
	                "LowerDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc-init/diff:/var/lib/docker/overlay2/6da29ec63a7e3e82b22991cef36ed721f46b0e2f648634b12d0ad6b7225284d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/192703e6ab970abbb9827f5509632f84db0ebe73b49e182c2b6ca1e117bf77cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-481299",
	                "Source": "/var/lib/docker/volumes/addons-481299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-481299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-481299",
	                "name.minikube.sigs.k8s.io": "addons-481299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f570c3a0d59e39fd3f8ba76488f6d4555992d509e8801ed716c6c97159646bf",
	            "SandboxKey": "/var/run/docker/netns/9f570c3a0d59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-481299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:8f:15:9b:c7:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bb7b19c2893329488850783d25e49e7f60d5e1cf75347f698d754ce5d4cf2448",
	                    "EndpointID": "51c67f4b258ce43a4b9996e94609bda24fc6b23ad970edc2979b013dbc0a113d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-481299",
	                        "7812e8ac5e4a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-481299 -n addons-481299
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 logs -n 25: (1.183911276s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ -o=json --download-only -p download-only-223621 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-809137   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-223621                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-223621   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p download-docker-419204 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p download-docker-419204                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-419204 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ --download-only -p binary-mirror-415208 --alsologtostderr --binary-mirror http://127.0.0.1:32775 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ -p binary-mirror-415208                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-415208   │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ addons  │ enable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ addons  │ disable dashboard -p addons-481299                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ start   │ -p addons-481299 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 12:02 UTC │
	│ addons  │ addons-481299 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ ip      │ addons-481299 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-481299 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:06 UTC │ 18 Oct 25 12:07 UTC │
	│ addons  │ addons-481299 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-481299          │ jenkins │ v1.37.0 │ 18 Oct 25 12:07 UTC │ 18 Oct 25 12:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:59:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:59:44.391839  838384 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:44.391981  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.391993  838384 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:44.391997  838384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:44.392266  838384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 11:59:44.392787  838384 out.go:368] Setting JSON to false
	I1018 11:59:44.393655  838384 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13335,"bootTime":1760775450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 11:59:44.393734  838384 start.go:141] virtualization:  
	I1018 11:59:44.397036  838384 out.go:179] * [addons-481299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 11:59:44.400845  838384 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:59:44.400912  838384 notify.go:220] Checking for updates...
	I1018 11:59:44.406563  838384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:59:44.409444  838384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 11:59:44.412417  838384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 11:59:44.415444  838384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 11:59:44.418427  838384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:59:44.421615  838384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:59:44.443038  838384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 11:59:44.443164  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.504142  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.494831106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.504254  838384 docker.go:318] overlay module found
	I1018 11:59:44.507452  838384 out.go:179] * Using the docker driver based on user configuration
	I1018 11:59:44.510245  838384 start.go:305] selected driver: docker
	I1018 11:59:44.510268  838384 start.go:925] validating driver "docker" against <nil>
	I1018 11:59:44.510293  838384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:59:44.511015  838384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:44.564605  838384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 11:59:44.555845429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:44.564766  838384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:59:44.564999  838384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:59:44.567965  838384 out.go:179] * Using Docker driver with root privileges
	I1018 11:59:44.570848  838384 cni.go:84] Creating CNI manager for ""
	I1018 11:59:44.570930  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 11:59:44.570945  838384 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:59:44.571024  838384 start.go:349] cluster config:
	{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:59:44.574127  838384 out.go:179] * Starting "addons-481299" primary control-plane node in "addons-481299" cluster
	I1018 11:59:44.576970  838384 cache.go:123] Beginning downloading kic base image for docker with docker
	I1018 11:59:44.579989  838384 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:59:44.582852  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 11:59:44.582920  838384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1018 11:59:44.582938  838384 cache.go:58] Caching tarball of preloaded images
	I1018 11:59:44.582927  838384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:59:44.583037  838384 preload.go:233] Found /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 11:59:44.583047  838384 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1018 11:59:44.583380  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 11:59:44.583404  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json: {Name:mkb94853570c970a5649e27444200bcb46f3c9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:59:44.598730  838384 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:59:44.598847  838384 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:59:44.598874  838384 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 11:59:44.598880  838384 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 11:59:44.598888  838384 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 11:59:44.598893  838384 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:00:04.269349  838384 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:00:04.269387  838384 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:00:04.269433  838384 start.go:360] acquireMachinesLock for addons-481299: {Name:mkd3a8e0953bb87092640ae884afd157e18dcff7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:00:04.270184  838384 start.go:364] duration metric: took 728.139µs to acquireMachinesLock for "addons-481299"
	I1018 12:00:04.270232  838384 start.go:93] Provisioning new machine with config: &{Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:04.270339  838384 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:00:04.273733  838384 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:00:04.273967  838384 start.go:159] libmachine.API.Create for "addons-481299" (driver="docker")
	I1018 12:00:04.274006  838384 client.go:168] LocalClient.Create starting
	I1018 12:00:04.274125  838384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem
	I1018 12:00:04.457164  838384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem
	I1018 12:00:06.171545  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:00:06.188036  838384 cli_runner.go:211] docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:00:06.188147  838384 network_create.go:284] running [docker network inspect addons-481299] to gather additional debugging logs...
	I1018 12:00:06.188169  838384 cli_runner.go:164] Run: docker network inspect addons-481299
	W1018 12:00:06.204439  838384 cli_runner.go:211] docker network inspect addons-481299 returned with exit code 1
	I1018 12:00:06.204474  838384 network_create.go:287] error running [docker network inspect addons-481299]: docker network inspect addons-481299: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-481299 not found
	I1018 12:00:06.204503  838384 network_create.go:289] output of [docker network inspect addons-481299]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-481299 not found
	
	** /stderr **
	I1018 12:00:06.204601  838384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:06.221696  838384 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d74f90}
	I1018 12:00:06.221747  838384 network_create.go:124] attempt to create docker network addons-481299 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:00:06.221806  838384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-481299 addons-481299
	I1018 12:00:06.276048  838384 network_create.go:108] docker network addons-481299 192.168.49.0/24 created
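The subnet attributes reported above (gateway 192.168.49.1, client range .2–.254, broadcast .255) follow directly from the 192.168.49.0/24 CIDR. A minimal Python sketch of the same derivation, for reference only (not minikube's actual Go code):

```python
import ipaddress

# Derive the attributes the log reports for the free private subnet.
net = ipaddress.ip_network("192.168.49.0/24")
hosts = list(net.hosts())          # usable host addresses .1 .. .254

gateway    = hosts[0]              # first usable address -> the bridge gateway
client_min = hosts[1]              # first address handed to a container
client_max = hosts[-1]             # last usable address
broadcast  = net.broadcast_address

print(gateway, client_min, client_max, broadcast)
```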
	I1018 12:00:06.276085  838384 kic.go:121] calculated static IP "192.168.49.2" for the "addons-481299" container
	I1018 12:00:06.276177  838384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:00:06.289979  838384 cli_runner.go:164] Run: docker volume create addons-481299 --label name.minikube.sigs.k8s.io=addons-481299 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:00:06.312048  838384 oci.go:103] Successfully created a docker volume addons-481299
	I1018 12:00:06.312139  838384 cli_runner.go:164] Run: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:00:08.424976  838384 cli_runner.go:217] Completed: docker run --rm --name addons-481299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --entrypoint /usr/bin/test -v addons-481299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.112781921s)
	I1018 12:00:08.425013  838384 oci.go:107] Successfully prepared a docker volume addons-481299
	I1018 12:00:08.425034  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:08.425054  838384 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:00:08.425128  838384 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:00:12.313130  838384 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-481299:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.887944574s)
	I1018 12:00:12.313163  838384 kic.go:203] duration metric: took 3.888105963s to extract preloaded images to volume ...
	W1018 12:00:12.313309  838384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:00:12.313422  838384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:00:12.370823  838384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-481299 --name addons-481299 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-481299 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-481299 --network addons-481299 --ip 192.168.49.2 --volume addons-481299:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:00:12.674830  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Running}}
	I1018 12:00:12.693179  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:12.716439  838384 cli_runner.go:164] Run: docker exec addons-481299 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:00:12.770464  838384 oci.go:144] the created container "addons-481299" has a running status.
	I1018 12:00:12.770498  838384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa...
	I1018 12:00:13.113159  838384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:00:13.151031  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.187260  838384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:00:13.187279  838384 kic_runner.go:114] Args: [docker exec --privileged addons-481299 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:00:13.265766  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:13.292626  838384 machine.go:93] provisionDockerMachine start ...
	I1018 12:00:13.292734  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:13.317367  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:13.317685  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:13.317701  838384 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:00:13.319021  838384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:00:16.468726  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.468765  838384 ubuntu.go:182] provisioning hostname "addons-481299"
	I1018 12:00:16.468854  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.485891  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.486196  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.486212  838384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-481299 && echo "addons-481299" | sudo tee /etc/hostname
	I1018 12:00:16.646538  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-481299
	
	I1018 12:00:16.646612  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:16.664006  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:16.664309  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:16.664343  838384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-481299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-481299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-481299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:00:16.809369  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:00:16.809393  838384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-835760/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-835760/.minikube}
	I1018 12:00:16.809411  838384 ubuntu.go:190] setting up certificates
	I1018 12:00:16.809435  838384 provision.go:84] configureAuth start
	I1018 12:00:16.809492  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:16.826275  838384 provision.go:143] copyHostCerts
	I1018 12:00:16.826387  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/ca.pem (1078 bytes)
	I1018 12:00:16.826516  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/cert.pem (1123 bytes)
	I1018 12:00:16.826581  838384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-835760/.minikube/key.pem (1679 bytes)
	I1018 12:00:16.826641  838384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem org=jenkins.addons-481299 san=[127.0.0.1 192.168.49.2 addons-481299 localhost minikube]
	I1018 12:00:17.507919  838384 provision.go:177] copyRemoteCerts
	I1018 12:00:17.507999  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:00:17.508042  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.525367  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:17.628972  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:00:17.646764  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:00:17.664067  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:00:17.681849  838384 provision.go:87] duration metric: took 872.389064ms to configureAuth
	I1018 12:00:17.681878  838384 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:00:17.682093  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:17.682153  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.699224  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.699534  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.699549  838384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1018 12:00:17.845632  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1018 12:00:17.845657  838384 ubuntu.go:71] root file system type: overlay
	I1018 12:00:17.845771  838384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1018 12:00:17.845845  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:17.863079  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:17.863381  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:17.863474  838384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1018 12:00:18.022880  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1018 12:00:18.022967  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.040408  838384 main.go:141] libmachine: Using SSH client type: native
	I1018 12:00:18.040739  838384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1018 12:00:18.040762  838384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1018 12:00:18.977721  838384 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:17:10.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-18 12:00:18.020798373 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1018 12:00:18.977763  838384 machine.go:96] duration metric: took 5.685111923s to provisionDockerMachine
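The `diff -u ... || { mv ...; systemctl daemon-reload; }` one-liner above is an update-only-if-changed pattern: the new unit is written to `docker.service.new`, compared against the installed unit, and only swapped in (with a daemon-reload and restart) when the rendered content actually differs. A minimal Python sketch of the same idea, using a hypothetical helper rather than minikube's real implementation:

```python
import filecmp
import os
import shutil

def update_unit_if_changed(current_path: str, new_content: str) -> bool:
    """Write <unit>.new, diff it against the installed unit, and swap it in
    only if the content differs (the point at which the real flow would run
    `systemctl daemon-reload && systemctl restart docker`)."""
    new_path = current_path + ".new"
    with open(new_path, "w") as f:
        f.write(new_content)
    if os.path.exists(current_path) and filecmp.cmp(current_path, new_path, shallow=False):
        os.remove(new_path)   # no change: keep the existing unit, skip the restart
        return False
    shutil.move(new_path, current_path)
    return True
```

The payoff is idempotence: rerunning provisioning against an already-configured node leaves the running daemon undisturbed.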
	I1018 12:00:18.977774  838384 client.go:171] duration metric: took 14.703758303s to LocalClient.Create
	I1018 12:00:18.977788  838384 start.go:167] duration metric: took 14.703822977s to libmachine.API.Create "addons-481299"
	I1018 12:00:18.977798  838384 start.go:293] postStartSetup for "addons-481299" (driver="docker")
	I1018 12:00:18.977811  838384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:00:18.977881  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:00:18.977924  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:18.994172  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.097162  838384 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:00:19.100393  838384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:00:19.100424  838384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:00:19.100436  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/addons for local assets ...
	I1018 12:00:19.100511  838384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-835760/.minikube/files for local assets ...
	I1018 12:00:19.100544  838384 start.go:296] duration metric: took 122.736875ms for postStartSetup
	I1018 12:00:19.100868  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.116943  838384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/config.json ...
	I1018 12:00:19.117256  838384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:00:19.117306  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.133849  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.234008  838384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:00:19.238363  838384 start.go:128] duration metric: took 14.968007424s to createHost
	I1018 12:00:19.238389  838384 start.go:83] releasing machines lock for "addons-481299", held for 14.968185396s
	I1018 12:00:19.238463  838384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-481299
	I1018 12:00:19.254388  838384 ssh_runner.go:195] Run: cat /version.json
	I1018 12:00:19.254441  838384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:00:19.254505  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.254443  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:19.276799  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.279431  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:19.472177  838384 ssh_runner.go:195] Run: systemctl --version
	I1018 12:00:19.478376  838384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:00:19.482387  838384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:00:19.482484  838384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:00:19.508985  838384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:00:19.509011  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.509045  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.509137  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.523297  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:00:19.531952  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:00:19.540691  838384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:00:19.540763  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:00:19.549594  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.558008  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:00:19.566758  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:00:19.576635  838384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:00:19.584880  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:00:19.593685  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:00:19.602453  838384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1018 12:00:19.611252  838384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:00:19.618753  838384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:00:19.626251  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:19.751139  838384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1018 12:00:19.836762  838384 start.go:495] detecting cgroup driver to use...
	I1018 12:00:19.836852  838384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:00:19.836930  838384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1018 12:00:19.852094  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.864977  838384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:00:19.898620  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:00:19.911221  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:00:19.924203  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:00:19.938348  838384 ssh_runner.go:195] Run: which cri-dockerd
	I1018 12:00:19.942139  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1018 12:00:19.949360  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1018 12:00:19.962074  838384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1018 12:00:20.083838  838384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1018 12:00:20.212913  838384 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1018 12:00:20.213052  838384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1018 12:00:20.227027  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1018 12:00:20.239819  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:20.356987  838384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1018 12:00:20.730788  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:00:20.745019  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1018 12:00:20.760946  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:20.776122  838384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1018 12:00:20.908216  838384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1018 12:00:21.026034  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.144947  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1018 12:00:21.160912  838384 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1018 12:00:21.173334  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.292303  838384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1018 12:00:21.361374  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1018 12:00:21.374804  838384 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1018 12:00:21.374947  838384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1018 12:00:21.379225  838384 start.go:563] Will wait 60s for crictl version
	I1018 12:00:21.379335  838384 ssh_runner.go:195] Run: which crictl
	I1018 12:00:21.382801  838384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:00:21.411496  838384 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1018 12:00:21.411621  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.433381  838384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1018 12:00:21.460386  838384 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1018 12:00:21.460510  838384 cli_runner.go:164] Run: docker network inspect addons-481299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:00:21.476212  838384 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:00:21.480421  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
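The `/etc/hosts` rewrite above (drop any stale `host.minikube.internal` line, then append the current mapping) can be mirrored as a pure function. This is an illustrative sketch of the same grep -v / echo pipeline, not minikube's own code; `HOSTS` below is sample data, not a real file.

```python
def upsert_host(hosts_text, ip, name):
    """Remove any line ending in "\t<name>" and append "<ip>\t<name>",
    mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ pipeline above."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(ip + "\t" + name)
    return "\n".join(kept) + "\n"

HOSTS = "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
print(upsert_host(HOSTS, "192.168.49.1", "host.minikube.internal"))
```

The update is idempotent: rerunning it with the same IP leaves the file unchanged, which is why minikube can issue it on every start.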
	I1018 12:00:21.490334  838384 kubeadm.go:883] updating cluster {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:00:21.490454  838384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1018 12:00:21.490519  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.509291  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1018 12:00:21.509313  838384 docker.go:621] Images already preloaded, skipping extraction
	I1018 12:00:21.509377  838384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1018 12:00:21.528526  838384 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
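The "Images already preloaded, skipping extraction" decision implied by the two identical listings above amounts to checking that every required `repo:tag` already appears in `docker images` output. A minimal sketch of that set comparison, with the image list taken from the log (the function name is illustrative, not minikube's):

```python
def preload_satisfied(required, present):
    """True when every required repo:tag is already present locally."""
    return set(required) <= set(present)

present = [
    "registry.k8s.io/kube-apiserver:v1.34.1",
    "registry.k8s.io/kube-controller-manager:v1.34.1",
    "registry.k8s.io/kube-scheduler:v1.34.1",
    "registry.k8s.io/kube-proxy:v1.34.1",
    "registry.k8s.io/etcd:3.6.4-0",
    "registry.k8s.io/pause:3.10.1",
    "registry.k8s.io/coredns/coredns:v1.12.1",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
]
print(preload_satisfied(["registry.k8s.io/etcd:3.6.4-0"], present))  # → True
```

When the check fails, minikube would fall back to extracting the preload tarball instead of skipping it.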
	I1018 12:00:21.528551  838384 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:00:21.528561  838384 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1018 12:00:21.528654  838384 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-481299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:00:21.528727  838384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1018 12:00:21.581422  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:21.581453  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:21.581477  838384 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:00:21.581497  838384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-481299 NodeName:addons-481299 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:00:21.581620  838384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-481299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
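The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to `/var/tmp/minikube/kubeadm.yaml.new`. A stdlib-only sketch of splitting such a stream on `---` separators and pulling out each document's `kind` — illustrative only, not how minikube or kubeadm parses it:

```python
import re

def doc_kinds(stream):
    """Split a multi-document YAML stream on '---' lines and return
    each document's top-level 'kind' value, in order."""
    kinds = []
    for doc in re.split(r"(?m)^---\s*$", stream):
        m = re.search(r"(?m)^kind:\s*(\S+)", doc)
        if m:
            kinds.append(m.group(1))
    return kinds

sample = (
    "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n"
    "---\n"
    "apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
    "---\n"
    "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
    "---\n"
    "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
)
print(doc_kinds(sample))
```

The regex deliberately matches only top-level `kind:` at the start of a line, so nested fields do not produce false positives.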
	
	I1018 12:00:21.581689  838384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:00:21.589319  838384 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:00:21.589438  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:00:21.597066  838384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 12:00:21.609634  838384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:00:21.622567  838384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:00:21.635067  838384 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:00:21.638568  838384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:00:21.647824  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:21.763836  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:21.780526  838384 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299 for IP: 192.168.49.2
	I1018 12:00:21.780549  838384 certs.go:195] generating shared ca certs ...
	I1018 12:00:21.780565  838384 certs.go:227] acquiring lock for ca certs: {Name:mk64b6deab019965f0a71e524905de5b1147e674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:21.780707  838384 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key
	I1018 12:00:22.104055  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt ...
	I1018 12:00:22.104088  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt: {Name:mk219bca6d4f923342f3157b83b20704e67715ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.104280  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key ...
	I1018 12:00:22.104292  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key: {Name:mk5866d61f42a017eec142fb2b809b20ce393fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.105023  838384 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key
	I1018 12:00:22.483966  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt ...
	I1018 12:00:22.483996  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt: {Name:mk2555147315d465bb6cbbe0ada0da38d0982383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484782  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key ...
	I1018 12:00:22.484802  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key: {Name:mke70c887973fdba4f391cbb188fbbcbcc67feb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.484942  838384 certs.go:257] generating profile certs ...
	I1018 12:00:22.485032  838384 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key
	I1018 12:00:22.485075  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt with IP's: []
	I1018 12:00:22.766617  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt ...
	I1018 12:00:22.766652  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: {Name:mk71fca78b75ad1dca519de9fb0da28787da9922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.767459  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key ...
	I1018 12:00:22.767475  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.key: {Name:mkfc16398f479a2713696abea6afcf11b3f1f676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:22.768111  838384 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943
	I1018 12:00:22.768134  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:00:23.220784  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 ...
	I1018 12:00:23.220818  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943: {Name:mk6c4c501d6b0340a440642541437e4d17340f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221595  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 ...
	I1018 12:00:23.221617  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943: {Name:mkcf265b1908bbb8a1e1337f558863cea9c9422c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:23.221706  838384 certs.go:382] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt
	I1018 12:00:23.221802  838384 certs.go:386] copying /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key.a02c8943 -> /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key
	I1018 12:00:23.221883  838384 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key
	I1018 12:00:23.221908  838384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt with IP's: []
	I1018 12:00:24.966646  838384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt ...
	I1018 12:00:24.966676  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt: {Name:mkee15b7b76ca75c0e94c93d9f26d926d62d1534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.967526  838384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key ...
	I1018 12:00:24.967544  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key: {Name:mk9dc72444d85c193fa5f2a33fb60909bdb001f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:24.968384  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:00:24.968428  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:00:24.968456  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:00:24.968483  838384 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-835760/.minikube/certs/key.pem (1679 bytes)
	I1018 12:00:24.969022  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:00:24.988248  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 12:00:25.007149  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:00:25.033918  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:00:25.066985  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:00:25.087585  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:00:25.108125  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:00:25.128839  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:00:25.148548  838384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:00:25.168256  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:00:25.184446  838384 ssh_runner.go:195] Run: openssl version
	I1018 12:00:25.191471  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:00:25.200726  838384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205014  838384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:00 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.205128  838384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:00:25.246539  838384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:00:25.255568  838384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:00:25.259344  838384 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:00:25.259420  838384 kubeadm.go:400] StartCluster: {Name:addons-481299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-481299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:25.259592  838384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1018 12:00:25.275896  838384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:00:25.283941  838384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:00:25.291784  838384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:00:25.291878  838384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:00:25.299853  838384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:00:25.299873  838384 kubeadm.go:157] found existing configuration files:
	
	I1018 12:00:25.299925  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:00:25.307671  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:00:25.307740  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:00:25.315009  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:00:25.322614  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:00:25.322749  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:00:25.330039  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.337975  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:00:25.338088  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:00:25.346012  838384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:00:25.353964  838384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:00:25.354066  838384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
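The eight commands above apply one pattern four times: grep each kubeconfig for the expected control-plane endpoint and `rm -f` the file when the check fails (here every grep exits with status 2 because the files do not exist yet, so all four are removed). A pure-function sketch of that decision, with the file map as hypothetical sample data rather than a real filesystem:

```python
ENDPOINT = "https://control-plane.minikube.internal:8443"

def stale_configs(files):
    """Return paths to remove: files that are missing (None) or do not
    mention the expected endpoint, mirroring the grep-then-rm loop."""
    return [path for path, text in files.items()
            if text is None or ENDPOINT not in text]

files = {
    "/etc/kubernetes/admin.conf": None,  # None: file does not exist yet
    "/etc/kubernetes/kubelet.conf": None,
    "/etc/kubernetes/controller-manager.conf": None,
    "/etc/kubernetes/scheduler.conf": None,
}
print(stale_configs(files))
```

A file whose contents already reference the endpoint would be kept, which is what lets a restart reuse existing kubeconfigs instead of regenerating them.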
	I1018 12:00:25.361694  838384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:00:25.404873  838384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:00:25.405115  838384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:00:25.430950  838384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:00:25.431100  838384 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:00:25.431181  838384 kubeadm.go:318] OS: Linux
	I1018 12:00:25.431259  838384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:00:25.431348  838384 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:00:25.431436  838384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:00:25.431543  838384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:00:25.431629  838384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:00:25.431723  838384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:00:25.431825  838384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:00:25.431922  838384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:00:25.432024  838384 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:00:25.495079  838384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:00:25.495226  838384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:00:25.495348  838384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:00:25.521654  838384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:00:25.528010  838384 out.go:252]   - Generating certificates and keys ...
	I1018 12:00:25.528156  838384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:00:25.528258  838384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:00:26.879731  838384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:00:26.999718  838384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:00:27.674514  838384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:00:27.895095  838384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:00:28.404519  838384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:00:28.404866  838384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:28.991527  838384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:00:28.991878  838384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-481299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:00:29.164485  838384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:00:29.695021  838384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:00:30.175093  838384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:00:30.176279  838384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:00:30.242007  838384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:00:30.627655  838384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:00:31.915030  838384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:00:32.022022  838384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:00:32.099687  838384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:00:32.100473  838384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:00:32.105382  838384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:00:32.109132  838384 out.go:252]   - Booting up control plane ...
	I1018 12:00:32.109259  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:00:32.109346  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:00:32.109424  838384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:00:32.126476  838384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:00:32.126811  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:00:32.134355  838384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:00:32.134771  838384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:00:32.135020  838384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:00:32.269048  838384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:00:32.269176  838384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:00:34.270707  838384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001804064s
	I1018 12:00:34.274174  838384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:00:34.274278  838384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:00:34.274378  838384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:00:34.274465  838384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:00:39.829992  838384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.555227625s
	I1018 12:00:40.310124  838384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.035887374s
	I1018 12:00:41.776311  838384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501922731s
	I1018 12:00:41.797119  838384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:00:41.811164  838384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:00:41.829757  838384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:00:41.829974  838384 kubeadm.go:318] [mark-control-plane] Marking the node addons-481299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:00:41.842508  838384 kubeadm.go:318] [bootstrap-token] Using token: 28o1tp.74tu7k9l138h9i1r
	I1018 12:00:41.845472  838384 out.go:252]   - Configuring RBAC rules ...
	I1018 12:00:41.845619  838384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:00:41.851305  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:00:41.861731  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:00:41.865572  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:00:41.869978  838384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:00:41.874041  838384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:00:42.183860  838384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:00:42.609144  838384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:00:43.183159  838384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:00:43.184156  838384 kubeadm.go:318] 
	I1018 12:00:43.184228  838384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:00:43.184234  838384 kubeadm.go:318] 
	I1018 12:00:43.184315  838384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:00:43.184319  838384 kubeadm.go:318] 
	I1018 12:00:43.184351  838384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:00:43.184413  838384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:00:43.184465  838384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:00:43.184469  838384 kubeadm.go:318] 
	I1018 12:00:43.184525  838384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:00:43.184530  838384 kubeadm.go:318] 
	I1018 12:00:43.184581  838384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:00:43.184586  838384 kubeadm.go:318] 
	I1018 12:00:43.184640  838384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:00:43.184718  838384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:00:43.184789  838384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:00:43.184808  838384 kubeadm.go:318] 
	I1018 12:00:43.184897  838384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:00:43.184977  838384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:00:43.184982  838384 kubeadm.go:318] 
	I1018 12:00:43.185069  838384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185176  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a \
	I1018 12:00:43.185221  838384 kubeadm.go:318] 	--control-plane 
	I1018 12:00:43.185227  838384 kubeadm.go:318] 
	I1018 12:00:43.185315  838384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:00:43.185320  838384 kubeadm.go:318] 
	I1018 12:00:43.185405  838384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 28o1tp.74tu7k9l138h9i1r \
	I1018 12:00:43.185510  838384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f1eff663fdc496d18c86d281f03a23ec2067e2ea575ba5de908f9f832f1d41a 
	I1018 12:00:43.189253  838384 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:00:43.189490  838384 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:00:43.189602  838384 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:00:43.189623  838384 cni.go:84] Creating CNI manager for ""
	I1018 12:00:43.189644  838384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:00:43.192807  838384 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 12:00:43.195736  838384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 12:00:43.204431  838384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 12:00:43.217468  838384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:00:43.217531  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.217601  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-481299 minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-481299 minikube.k8s.io/primary=true
	I1018 12:00:43.380129  838384 ops.go:34] apiserver oom_adj: -16
	I1018 12:00:43.380250  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:43.880662  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.380452  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:44.880320  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.381247  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:45.881072  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.380646  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:46.880456  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.380377  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:47.880798  838384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:00:48.013839  838384 kubeadm.go:1113] duration metric: took 4.796362824s to wait for elevateKubeSystemPrivileges
	I1018 12:00:48.013872  838384 kubeadm.go:402] duration metric: took 22.754456109s to StartCluster
	I1018 12:00:48.013892  838384 settings.go:142] acquiring lock: {Name:mk03f4782566e111488ff6c3364f55783ae35e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014045  838384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:00:48.014571  838384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/kubeconfig: {Name:mk9c5565e95c70d1e94713bb6739a4c61bf95b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:48.014840  838384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1018 12:00:48.014969  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:00:48.015286  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.015326  838384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:00:48.015411  838384 addons.go:69] Setting yakd=true in profile "addons-481299"
	I1018 12:00:48.015430  838384 addons.go:238] Setting addon yakd=true in "addons-481299"
	I1018 12:00:48.015461  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.016044  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.016527  838384 addons.go:69] Setting metrics-server=true in profile "addons-481299"
	I1018 12:00:48.016553  838384 addons.go:238] Setting addon metrics-server=true in "addons-481299"
	I1018 12:00:48.016589  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.017051  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.017907  838384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.019860  838384 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-481299"
	I1018 12:00:48.019957  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.021475  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019545  838384 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-481299"
	I1018 12:00:48.022344  838384 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-481299"
	I1018 12:00:48.022416  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.019668  838384 addons.go:69] Setting cloud-spanner=true in profile "addons-481299"
	I1018 12:00:48.024324  838384 addons.go:238] Setting addon cloud-spanner=true in "addons-481299"
	I1018 12:00:48.024374  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.024861  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.025713  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019679  838384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-481299"
	I1018 12:00:48.030476  838384 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:00:48.030552  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.031113  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019684  838384 addons.go:69] Setting default-storageclass=true in profile "addons-481299"
	I1018 12:00:48.041523  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-481299"
	I1018 12:00:48.042108  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019687  838384 addons.go:69] Setting gcp-auth=true in profile "addons-481299"
	I1018 12:00:48.050391  838384 mustload.go:65] Loading cluster: addons-481299
	I1018 12:00:48.050608  838384 config.go:182] Loaded profile config "addons-481299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:00:48.050867  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.019691  838384 addons.go:69] Setting ingress=true in profile "addons-481299"
	I1018 12:00:48.072186  838384 addons.go:238] Setting addon ingress=true in "addons-481299"
	I1018 12:00:48.072238  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.072812  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.104085  838384 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:00:48.019694  838384 addons.go:69] Setting ingress-dns=true in profile "addons-481299"
	I1018 12:00:48.105418  838384 addons.go:238] Setting addon ingress-dns=true in "addons-481299"
	I1018 12:00:48.105466  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.105984  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.112477  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:00:48.112591  838384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:00:48.112702  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019703  838384 addons.go:69] Setting inspektor-gadget=true in profile "addons-481299"
	I1018 12:00:48.128479  838384 addons.go:238] Setting addon inspektor-gadget=true in "addons-481299"
	I1018 12:00:48.128547  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.129027  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.137976  838384 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:00:48.142238  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:00:48.142360  838384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:00:48.142534  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.019834  838384 out.go:179] * Verifying Kubernetes components...
	I1018 12:00:48.020098  838384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-481299"
	I1018 12:00:48.152243  838384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-481299"
	I1018 12:00:48.152617  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020105  838384 addons.go:69] Setting registry=true in profile "addons-481299"
	I1018 12:00:48.157306  838384 addons.go:238] Setting addon registry=true in "addons-481299"
	I1018 12:00:48.157349  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.157826  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.020108  838384 addons.go:69] Setting registry-creds=true in profile "addons-481299"
	I1018 12:00:48.184275  838384 addons.go:238] Setting addon registry-creds=true in "addons-481299"
	I1018 12:00:48.184329  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.184843  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.201404  838384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:00:48.237763  838384 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:00:48.240788  838384 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:48.240812  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:00:48.240888  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020111  838384 addons.go:69] Setting storage-provisioner=true in profile "addons-481299"
	I1018 12:00:48.255281  838384 addons.go:238] Setting addon storage-provisioner=true in "addons-481299"
	I1018 12:00:48.255324  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.255806  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.263489  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.267247  838384 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:00:48.020115  838384 addons.go:69] Setting volumesnapshots=true in profile "addons-481299"
	I1018 12:00:48.287811  838384 addons.go:238] Setting addon volumesnapshots=true in "addons-481299"
	I1018 12:00:48.287854  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.288331  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.297150  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:00:48.305304  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:00:48.307825  838384 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:48.307845  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:00:48.307915  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.020121  838384 addons.go:69] Setting volcano=true in profile "addons-481299"
	I1018 12:00:48.309165  838384 addons.go:238] Setting addon volcano=true in "addons-481299"
	I1018 12:00:48.309239  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.309684  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.323250  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.327844  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:00:48.334482  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:00:48.338284  838384 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:00:48.340947  838384 addons.go:238] Setting addon default-storageclass=true in "addons-481299"
	I1018 12:00:48.340997  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.341575  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.353583  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.359214  838384 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:48.359297  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:00:48.359408  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.368078  838384 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:00:48.390679  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:00:48.394064  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:00:48.396983  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:00:48.397244  838384 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:00:48.406053  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:00:48.407309  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:00:48.409466  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:48.409487  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:00:48.409583  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.411154  838384 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:48.411801  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:00:48.411868  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.436870  838384 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:48.436945  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:00:48.437058  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.470112  838384 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:00:48.476747  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:00:48.411213  838384 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:00:48.483079  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.485480  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:00:48.485501  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:00:48.485576  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.486853  838384 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-481299"
	I1018 12:00:48.486939  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:48.487597  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:48.517521  838384 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:00:48.517545  838384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:00:48.517619  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.537436  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.538328  838384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:48.538343  838384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:00:48.538398  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.553217  838384 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:00:48.554968  838384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:00:48.556537  838384 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:00:48.556742  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:00:48.556825  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.570246  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:48.570267  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:00:48.570329  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.589916  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.591402  838384 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1018 12:00:48.591565  838384 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:00:48.600425  838384 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1018 12:00:48.600484  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:00:48.600494  838384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:00:48.600575  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.621433  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.622509  838384 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1018 12:00:48.640797  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.660645  838384 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:48.660673  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1018 12:00:48.660741  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.709393  838384 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:00:48.713361  838384 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:00:48.715118  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.737398  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.754446  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.759608  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.767028  838384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:48.767051  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:00:48.767120  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:48.774154  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.778151  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.785350  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.790983  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.816526  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:48.817012  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	W1018 12:00:48.829777  838384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:00:48.829816  838384 retry.go:31] will retry after 218.102369ms: ssh: handshake failed: EOF
	I1018 12:00:48.929211  838384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:00:48.930600  838384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:00:49.692319  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:00:49.692342  838384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:00:49.778672  838384 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:00:49.778699  838384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:00:49.799026  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:00:49.799099  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:00:49.810733  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:00:49.821114  838384 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:49.821260  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:00:49.834872  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:00:49.938418  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:00:49.941990  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:00:49.960460  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:00:49.960486  838384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:00:49.976966  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:00:49.976992  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:00:49.982393  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:00:50.005856  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:00:50.006119  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:00:50.006139  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:00:50.030431  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:00:50.033442  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:00:50.078822  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:00:50.078897  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:00:50.223993  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:00:50.246008  838384 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:00:50.246050  838384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:00:50.280883  838384 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.280909  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:00:50.285103  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:00:50.292971  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:00:50.292997  838384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:00:50.305000  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:00:50.305028  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:00:50.307406  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:00:50.307430  838384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:00:50.427676  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:00:50.427702  838384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:00:50.443185  838384 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.443211  838384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:00:50.467032  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:00:50.523115  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:00:50.551023  838384 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.551047  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:00:50.599104  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:00:50.599134  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:00:50.673841  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:00:50.676793  838384 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:50.676869  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:00:50.686555  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:00:50.800152  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:00:50.800216  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:00:50.933424  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:00:51.038226  838384 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:00:51.038306  838384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:00:51.103206  838384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.173952226s)
	I1018 12:00:51.103234  838384 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 12:00:51.104274  838384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.173645704s)
	I1018 12:00:51.104963  838384 node_ready.go:35] waiting up to 6m0s for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108529  838384 node_ready.go:49] node "addons-481299" is "Ready"
	I1018 12:00:51.108556  838384 node_ready.go:38] duration metric: took 3.569435ms for node "addons-481299" to be "Ready" ...
	I1018 12:00:51.108574  838384 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:00:51.108657  838384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:00:51.393745  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.582925006s)
	I1018 12:00:51.555496  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:00:51.555567  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:00:51.607401  838384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-481299" context rescaled to 1 replicas
	I1018 12:00:52.004594  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:00:52.004679  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:00:52.455938  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.620977988s)
	I1018 12:00:52.518962  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:00:52.519035  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:00:52.943868  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:00:52.943948  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:00:53.278688  838384 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:53.278715  838384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 12:00:53.606823  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:00:54.485982  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.547484935s)
	I1018 12:00:55.375228  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.392799926s)
	I1018 12:00:55.381718  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.439683188s)
	I1018 12:00:55.687452  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:00:55.687560  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:55.715052  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:56.874895  838384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:00:57.069070  838384 addons.go:238] Setting addon gcp-auth=true in "addons-481299"
	I1018 12:00:57.069155  838384 host.go:66] Checking if "addons-481299" exists ...
	I1018 12:00:57.069643  838384 cli_runner.go:164] Run: docker container inspect addons-481299 --format={{.State.Status}}
	I1018 12:00:57.097607  838384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:00:57.097663  838384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-481299
	I1018 12:00:57.125594  838384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/addons-481299/id_rsa Username:docker}
	I1018 12:00:57.466799  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.460903197s)
	I1018 12:00:57.466833  838384 addons.go:479] Verifying addon ingress=true in "addons-481299"
	I1018 12:00:57.466986  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436528408s)
	I1018 12:00:57.467006  838384 addons.go:479] Verifying addon registry=true in "addons-481299"
	I1018 12:00:57.467280  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.433810994s)
	I1018 12:00:57.467337  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.243320649s)
	I1018 12:00:57.467373  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.182252425s)
	I1018 12:00:57.471733  838384 out.go:179] * Verifying registry addon...
	I1018 12:00:57.471837  838384 out.go:179] * Verifying ingress addon...
	I1018 12:00:57.475972  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:00:57.476891  838384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:00:57.480952  838384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:00:57.480978  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:57.481343  838384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:00:57.481364  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987158  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:57.987606  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505049  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:58.505238  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102780  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.102881  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.537809  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:00:59.538274  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.983982  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:00:59.989640  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:00.494679  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:00.494785  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.150283  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.151501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.410549  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.94347934s)
	I1018 12:01:01.410647  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.88750974s)
	W1018 12:01:01.410663  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410677  838384 retry.go:31] will retry after 201.547914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:01.410735  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.736813896s)
	I1018 12:01:01.410746  838384 addons.go:479] Verifying addon metrics-server=true in "addons-481299"
	I1018 12:01:01.410781  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.724138069s)
	I1018 12:01:01.410951  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.477459185s)
	W1018 12:01:01.415024  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.415053  838384 retry.go:31] will retry after 281.160141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:01:01.410970  838384 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.302296364s)
	I1018 12:01:01.415078  838384 api_server.go:72] duration metric: took 13.40020826s to wait for apiserver process to appear ...
	I1018 12:01:01.415084  838384 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:01:01.415099  838384 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:01:01.411139  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.804213387s)
	I1018 12:01:01.415446  838384 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-481299"
	I1018 12:01:01.411161  838384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.313535397s)
	I1018 12:01:01.419793  838384 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:01.419828  838384 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-481299 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:01:01.419793  838384 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:01:01.425137  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:01:01.428507  838384 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:01:01.431298  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:01:01.431327  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:01:01.501730  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:01:01.501758  838384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:01:01.508482  838384 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:01:01.513158  838384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:01:01.513201  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:01.539110  838384 api_server.go:141] control plane version: v1.34.1
	I1018 12:01:01.539156  838384 api_server.go:131] duration metric: took 124.06524ms to wait for apiserver health ...
	I1018 12:01:01.539166  838384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:01:01.613319  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:01.627688  838384 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.627710  838384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:01:01.665079  838384 system_pods.go:59] 18 kube-system pods found
	I1018 12:01:01.665176  838384 system_pods.go:61] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.665300  838384 system_pods.go:61] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.665325  838384 system_pods.go:61] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.665360  838384 system_pods.go:61] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.665384  838384 system_pods.go:61] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.665405  838384 system_pods.go:61] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.665426  838384 system_pods.go:61] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.665459  838384 system_pods.go:61] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.665480  838384 system_pods.go:61] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.665500  838384 system_pods.go:61] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.665522  838384 system_pods.go:61] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.665545  838384 system_pods.go:61] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.665567  838384 system_pods.go:61] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.665592  838384 system_pods.go:61] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.665624  838384 system_pods.go:61] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.665649  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665673  838384 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.665696  838384 system_pods.go:61] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.665719  838384 system_pods.go:74] duration metric: took 126.545392ms to wait for pod list to return data ...
	I1018 12:01:01.665740  838384 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:01:01.666428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:01.666860  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:01.697178  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:01.740057  838384 default_sa.go:45] found service account: "default"
	I1018 12:01:01.740128  838384 default_sa.go:55] duration metric: took 74.366134ms for default service account to be created ...
	I1018 12:01:01.740153  838384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:01:01.750661  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:01:01.822559  838384 system_pods.go:86] 18 kube-system pods found
	I1018 12:01:01.822661  838384 system_pods.go:89] "coredns-66bc5c9577-rz5sw" [23b01fea-377e-4e85-b4b9-43cffb7a2115] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:01:01.822687  838384 system_pods.go:89] "csi-hostpath-attacher-0" [cdf5620f-c158-4244-b45c-ac4ddb0fa5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:01:01.822728  838384 system_pods.go:89] "csi-hostpath-resizer-0" [4bfe92a2-f829-4cee-b780-865dd586be0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:01:01.822757  838384 system_pods.go:89] "csi-hostpathplugin-c4hl2" [ffd4b011-f37c-4d04-9594-348f79fcaede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:01:01.822780  838384 system_pods.go:89] "etcd-addons-481299" [1d180bcd-d717-43a3-9fcd-a2b5949ccfcb] Running
	I1018 12:01:01.822805  838384 system_pods.go:89] "kube-apiserver-addons-481299" [6a242323-8d27-4588-bb46-5941bfa905c4] Running
	I1018 12:01:01.822838  838384 system_pods.go:89] "kube-controller-manager-addons-481299" [9b24c454-5667-44c7-8e9b-2f45d28f5326] Running
	I1018 12:01:01.822870  838384 system_pods.go:89] "kube-ingress-dns-minikube" [7933687a-0aa6-4713-ab5f-24182df04f20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:01:01.822891  838384 system_pods.go:89] "kube-proxy-9r8gt" [8de8d3c6-c36e-49e1-b752-840ce946b92b] Running
	I1018 12:01:01.822912  838384 system_pods.go:89] "kube-scheduler-addons-481299" [44d002a1-a3ee-4c9b-be4d-89f41057996a] Running
	I1018 12:01:01.822945  838384 system_pods.go:89] "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:01:01.822968  838384 system_pods.go:89] "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:01:01.822993  838384 system_pods.go:89] "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:01:01.823021  838384 system_pods.go:89] "registry-creds-764b6fb674-mrx54" [e7e18bec-e709-4ce6-8eb5-addfe6ca58a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:01:01.823053  838384 system_pods.go:89] "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:01:01.823081  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-24v9j" [f59d92ac-2dc3-4449-b14e-544b3cd73423] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823104  838384 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xfp9p" [55297ef0-b10d-4f69-87b2-b03af107f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:01:01.823125  838384 system_pods.go:89] "storage-provisioner" [e223452f-ab0e-4338-9f10-c71d0692ca07] Running
	I1018 12:01:01.823163  838384 system_pods.go:126] duration metric: took 82.986663ms to wait for k8s-apps to be running ...
	I1018 12:01:01.823190  838384 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:01:01.823297  838384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:01:01.937035  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.020374  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.020832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.430107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.481721  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.482135  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:02.929149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:02.979391  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:02.981564  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.429257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.480836  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.481398  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:03.928420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:03.981042  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:03.981494  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.429897  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.480820  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.482111  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:04.534347  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.920939875s)
	W1018 12:01:04.534529  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:04.534551  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.783796022s)
	I1018 12:01:04.534584  838384 retry.go:31] will retry after 344.775954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
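The failure repeated above is kubectl's client-side schema validation: every document in an applied manifest must declare `apiVersion` and `kind`, and the `ig-crd.yaml` being applied is missing both, so the apply exits with status 1 on every retry. A minimal sketch of that check (illustrative only, not kubectl's actual implementation; the `ig-crd` names below are taken from the log, the dict shapes are assumptions):

```python
# Sketch of the client-side validation kubectl performs before sending a
# manifest to the API server: each YAML document needs apiVersion and kind.
def validation_errors(doc: dict) -> list:
    """Return the messages kubectl would report for a missing type header."""
    errors = []
    if not doc.get("apiVersion"):
        errors.append("apiVersion not set")
    if not doc.get("kind"):
        errors.append("kind not set")
    return errors

# A document with no type header, as in the failing ig-crd.yaml:
broken = {"metadata": {"name": "ig-crd"}}
# The same document with a complete header passes the check:
fixed = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "ig-crd"},
}

print(validation_errors(broken))  # ['apiVersion not set', 'kind not set']
print(validation_errors(fixed))   # []
```

The error message suggests `--validate=false`, but that only skips this client-side check; the API server still rejects a document without `apiVersion`/`kind`, which is why the addon retry loop cannot succeed until the manifest itself is fixed.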
	I1018 12:01:04.534505  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.837082754s)
	I1018 12:01:04.534652  838384 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.71132558s)
	I1018 12:01:04.534680  838384 system_svc.go:56] duration metric: took 2.711486896s WaitForService to wait for kubelet
	I1018 12:01:04.534699  838384 kubeadm.go:586] duration metric: took 16.519828999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:01:04.534730  838384 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:01:04.537903  838384 addons.go:479] Verifying addon gcp-auth=true in "addons-481299"
	I1018 12:01:04.538525  838384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:01:04.538550  838384 node_conditions.go:123] node cpu capacity is 2
	I1018 12:01:04.538562  838384 node_conditions.go:105] duration metric: took 3.804704ms to run NodePressure ...
	I1018 12:01:04.538574  838384 start.go:241] waiting for startup goroutines ...
	I1018 12:01:04.541908  838384 out.go:179] * Verifying gcp-auth addon...
	I1018 12:01:04.545758  838384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:01:04.549219  838384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:01:04.549243  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:04.879563  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:04.929625  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:04.979948  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:04.980174  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.049663  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.429694  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:05.481708  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:05.482094  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:05.550159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:05.929062  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.026330  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146729751s)
	W1018 12:01:06.026368  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.026387  838384 retry.go:31] will retry after 530.445094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:06.029966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.030602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.049648  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.428850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.480186  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.480559  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:06.549270  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:06.557566  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:06.928852  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:06.981051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:06.981269  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.049233  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.428928  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.480325  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.480920  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:07.549301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:07.666703  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109094231s)
	W1018 12:01:07.666792  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.666842  838384 retry.go:31] will retry after 603.526174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:07.928884  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:07.980605  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:07.981081  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.049925  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.271551  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:08.428903  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.480126  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.481567  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:08.550087  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:08.929709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:08.982028  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:08.982437  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.049266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.386734  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.115102938s)
	W1018 12:01:09.386771  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.386790  838384 retry.go:31] will retry after 1.698280239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:09.429436  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.480507  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:09.480673  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.548565  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:09.929443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:09.981023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:09.981260  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.050256  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.428811  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.480068  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.480432  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:10.549416  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:10.929547  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:10.980558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:10.981814  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.049021  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.085333  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:11.429636  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.480969  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:11.481107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.549095  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:11.929997  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:11.981838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:11.982268  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.050261  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.218914  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133541217s)
	W1018 12:01:12.218954  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.218972  838384 retry.go:31] will retry after 1.345701913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:12.429128  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.480634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.480946  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:12.548958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:12.929428  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:12.980178  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:12.980839  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.048870  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.429927  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.478960  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:13.480243  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.550057  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:13.565398  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:13.929670  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:13.980749  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:13.981541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.050280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.430257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.481781  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.482216  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:14.549060  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:14.674833  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.10935166s)
	W1018 12:01:14.674911  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.674935  838384 retry.go:31] will retry after 4.015074409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:14.929000  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:14.980481  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:14.980936  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.049371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.428339  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.481378  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:15.549296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:15.928645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:15.979966  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:15.980109  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.049992  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.429300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.481508  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.481893  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:16.549251  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:16.929096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:16.979808  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:16.981556  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.050187  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.428204  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.488687  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:17.490476  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.549747  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:17.930135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:17.979752  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:17.980568  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.050894  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.428555  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:18.481131  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:18.483023  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:18.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:18.691077  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:18.933149  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.030226  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.030879  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.049132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.430174  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.481455  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:19.481882  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.549279  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:19.871749  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.180617611s)
	W1018 12:01:19.871793  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.871814  838384 retry.go:31] will retry after 2.515978912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:19.929529  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:19.980077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:19.980426  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.052024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.429464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.479838  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.480286  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:20.548968  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:20.928596  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:20.981305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:20.981761  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.050329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.430266  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.481117  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:21.482392  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.549618  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:21.930023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:21.980648  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:21.981714  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.051678  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.387962  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:22.428709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.480821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:22.481479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.549163  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:22.929296  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:22.980516  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:22.982615  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.049456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.430314  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.483574  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.483954  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:23.548803  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:23.611192  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.223187854s)
	W1018 12:01:23.611326  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.611398  838384 retry.go:31] will retry after 3.829328757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:23.930083  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:23.980905  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:01:23.983008  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.050301  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.429404  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.480464  838384 kapi.go:107] duration metric: took 27.00449331s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:01:24.481254  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:24.549300  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:24.933613  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:24.980927  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.050132  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.428170  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.480118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:25.549464  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:25.930020  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:25.980129  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.049622  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.429371  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.480502  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:26.549703  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:26.929758  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:26.981438  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.049865  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.428814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.441124  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:27.480812  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:27.549252  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:27.928712  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:27.981084  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.049705  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.430621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.481300  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:28.550023  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:28.635231  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.194068908s)
	W1018 12:01:28.635269  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.635306  838384 retry.go:31] will retry after 12.348738913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:28.929164  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:28.980051  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.049370  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.429546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:29.549608  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:29.928941  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:29.980182  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.053764  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.428514  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.480543  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:30.549832  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:30.929633  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:30.980682  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.048503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.429926  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.480706  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:31.549798  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:31.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:31.981501  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.050135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.430225  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.480064  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:32.549107  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:32.930034  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:32.980601  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.050009  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.428665  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.481555  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:33.549517  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:33.930160  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:33.980721  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.049111  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.430075  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.480599  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:34.549824  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:34.928660  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:34.981106  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.049429  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.429715  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.480901  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:35.549527  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:35.928904  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:35.979975  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.049469  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.429388  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.480507  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:36.549383  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:36.928634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:36.980506  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.049878  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.428319  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.480864  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:37.549709  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:37.933484  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:37.981299  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.050356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.432420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.531156  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:38.549592  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:38.929614  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:38.980821  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.048908  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.432072  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:39.483680  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:39.549850  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:39.930073  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.042288  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.051491  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.429568  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.529694  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.592500  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:40.930099  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:40.981977  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:40.984969  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:41.048958  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.428951  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.480793  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:41.548777  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:41.935106  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:41.980130  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.050058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.376984  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.391917505s)
	W1018 12:01:42.377022  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.377040  838384 retry.go:31] will retry after 9.460130047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:42.430039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.481400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:42.549280  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:42.930471  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:42.981285  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.050674  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.429650  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:43.550049  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:43.928854  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:43.980322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.049881  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.429125  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.480353  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:44.551159  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:44.928689  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:44.980964  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.084556  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.428634  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.529995  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:45.548921  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:45.930616  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:45.981362  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.050003  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.429102  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.480267  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:46.549814  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:46.928994  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:46.993938  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.049617  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.429329  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:47.481657  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:47.550619  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:47.930499  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.032192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.054316  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.440479  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.542668  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:48.550977  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:48.929346  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:48.980748  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.048945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.449438  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.498203  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:49.551135  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:49.928541  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:49.980400  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.055907  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.428586  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.481082  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:50.549349  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:50.929024  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:50.980094  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.049047  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.431103  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.480475  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:51.549503  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:51.837922  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:51.929098  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:51.985759  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.051760  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.429677  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.480886  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:52.549039  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:52.930278  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:52.981010  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.049546  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.257837  838384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.419863779s)
	W1018 12:01:53.257928  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.257962  838384 retry.go:31] will retry after 22.303674936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:01:53.428816  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.481322  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:53.549342  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:53.931755  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:53.981558  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.049443  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.430288  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.480192  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:54.549297  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:54.928602  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:54.980986  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.049290  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.429448  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.481581  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:55.549440  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:55.929401  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:55.980439  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.049558  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.502856  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:56.503643  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.549688  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:56.981333  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:56.981570  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.049930  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.428413  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.480953  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:57.549886  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:57.929377  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:57.980404  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.049753  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.432621  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.480756  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:58.549719  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:58.929420  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:58.980375  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.049250  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.428644  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.480847  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:01:59.549883  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:01:59.928356  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:01:59.980048  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.051534  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.430130  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.481277  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:00.549945  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:00.928372  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:00.980700  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.049545  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.429844  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.480617  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:01.551570  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:01.929456  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:01.982282  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.049630  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.430524  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.481077  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:02.549352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:02.928839  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:02.986842  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.049134  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.429457  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.480188  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:03.549085  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:03.928535  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:03.980769  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.048580  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.433304  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.480494  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.550096  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:04.930036  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.980758  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.050726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.429347  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.480398  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.550148  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.928153  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.980059  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.048983  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.428741  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.481118  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.548947  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.928168  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.980304  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.049241  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.428880  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.480547  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.549892  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.928799  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.981385  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.049855  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.430001  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.480903  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.548710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.932157  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.980207  838384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.052710  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.458645  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.502292  838384 kapi.go:107] duration metric: took 1m12.025397786s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:02:09.550352  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.930139  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.049906  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.428576  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.550008  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.928058  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.049512  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.430390  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.552726  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.929915  838384 kapi.go:107] duration metric: took 1m10.504798048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:02:12.049079  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:12.550257  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.050305  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.549966  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.049761  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.549742  838384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.050822  838384 kapi.go:107] duration metric: took 1m10.505062422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:02:15.055403  838384 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-481299 cluster.
	I1018 12:02:15.059410  838384 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:02:15.062510  838384 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 12:02:15.562479  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:16.467612  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:16.467639  838384 retry.go:31] will retry after 33.846916648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:50.315580  838384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:51.158051  838384 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:02:51.158143  838384 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 12:02:51.163303  838384 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, nvidia-device-plugin, registry-creds, ingress-dns, volcano, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 12:02:51.167339  838384 addons.go:514] duration metric: took 2m3.15084241s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher nvidia-device-plugin registry-creds ingress-dns volcano metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 12:02:51.167414  838384 start.go:246] waiting for cluster config update ...
	I1018 12:02:51.167438  838384 start.go:255] writing updated cluster config ...
	I1018 12:02:51.167767  838384 ssh_runner.go:195] Run: rm -f paused
	I1018 12:02:51.172083  838384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:51.175895  838384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.182054  838384 pod_ready.go:94] pod "coredns-66bc5c9577-rz5sw" is "Ready"
	I1018 12:02:51.182084  838384 pod_ready.go:86] duration metric: took 6.16488ms for pod "coredns-66bc5c9577-rz5sw" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.184338  838384 pod_ready.go:83] waiting for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.189308  838384 pod_ready.go:94] pod "etcd-addons-481299" is "Ready"
	I1018 12:02:51.189338  838384 pod_ready.go:86] duration metric: took 4.974639ms for pod "etcd-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.191936  838384 pod_ready.go:83] waiting for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.196757  838384 pod_ready.go:94] pod "kube-apiserver-addons-481299" is "Ready"
	I1018 12:02:51.196779  838384 pod_ready.go:86] duration metric: took 4.818772ms for pod "kube-apiserver-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.199224  838384 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.576040  838384 pod_ready.go:94] pod "kube-controller-manager-addons-481299" is "Ready"
	I1018 12:02:51.576065  838384 pod_ready.go:86] duration metric: took 376.813094ms for pod "kube-controller-manager-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:51.776236  838384 pod_ready.go:83] waiting for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.176297  838384 pod_ready.go:94] pod "kube-proxy-9r8gt" is "Ready"
	I1018 12:02:52.176326  838384 pod_ready.go:86] duration metric: took 400.06131ms for pod "kube-proxy-9r8gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.376266  838384 pod_ready.go:83] waiting for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776669  838384 pod_ready.go:94] pod "kube-scheduler-addons-481299" is "Ready"
	I1018 12:02:52.776749  838384 pod_ready.go:86] duration metric: took 400.404692ms for pod "kube-scheduler-addons-481299" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:02:52.776777  838384 pod_ready.go:40] duration metric: took 1.604654564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:02:52.836011  838384 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:02:52.839584  838384 out.go:179] * Done! kubectl is now configured to use "addons-481299" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 18 12:07:47 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:07:47Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 18 12:07:50 addons-481299 dockerd[1124]: time="2025-10-18T12:07:50.640117865Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:07:50 addons-481299 dockerd[1124]: time="2025-10-18T12:07:50.729241300Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:08:39 addons-481299 dockerd[1124]: time="2025-10-18T12:08:39.787042828Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:08:39 addons-481299 dockerd[1124]: time="2025-10-18T12:08:39.844824282Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:08:39 addons-481299 dockerd[1124]: time="2025-10-18T12:08:39.935184943Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:09:08 addons-481299 dockerd[1124]: time="2025-10-18T12:09:08.553000202Z" level=info msg="ignoring event" container=4d11f410c3b8c98852ff14619eda28eb805581426a40d9efdefd406f18815f04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:09:23 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:09:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3a466883749eb99a51ce5a71e37bc07f607ee3a8af95ff22135d555287c3eca/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:09:23 addons-481299 dockerd[1124]: time="2025-10-18T12:09:23.976236581Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:09:24 addons-481299 dockerd[1124]: time="2025-10-18T12:09:24.162592999Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:09:24 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:09:24Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Oct 18 12:09:38 addons-481299 dockerd[1124]: time="2025-10-18T12:09:38.643879071Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:09:38 addons-481299 dockerd[1124]: time="2025-10-18T12:09:38.750580748Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:02 addons-481299 dockerd[1124]: time="2025-10-18T12:10:02.633707160Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:10:02 addons-481299 dockerd[1124]: time="2025-10-18T12:10:02.736591036Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:06 addons-481299 dockerd[1124]: time="2025-10-18T12:10:06.803231269Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:53 addons-481299 dockerd[1124]: time="2025-10-18T12:10:53.626885605Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:10:53 addons-481299 dockerd[1124]: time="2025-10-18T12:10:53.717983549Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:11:24 addons-481299 dockerd[1124]: time="2025-10-18T12:11:24.018286377Z" level=info msg="ignoring event" container=a3a466883749eb99a51ce5a71e37bc07f607ee3a8af95ff22135d555287c3eca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:11:54 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/107cd0999ccf87c48f9e724bd424a2e567e6d0af6f99d64ebf5984e860e08ebd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 18 12:11:54 addons-481299 dockerd[1124]: time="2025-10-18T12:11:54.590527302Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:11:54 addons-481299 dockerd[1124]: time="2025-10-18T12:11:54.790784425Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:11:54 addons-481299 cri-dockerd[1425]: time="2025-10-18T12:11:54Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Oct 18 12:12:07 addons-481299 dockerd[1124]: time="2025-10-18T12:12:07.617628514Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:12:07 addons-481299 dockerd[1124]: time="2025-10-18T12:12:07.735046551Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	589ac0aebe47c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   f3d0a3d6da576       busybox                                     default
	4d48c965fce74       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	8a05108e7b43d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             10 minutes ago      Running             controller                               0                   431b9cc69faad       ingress-nginx-controller-675c5ddd98-285md   ingress-nginx
	f83395435c560       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	872fdd27e58a8       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	bfe8ec14dbc5b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	fece67ae97aa8       9a80c0c8eb61c                                                                                                                                10 minutes ago      Exited              patch                                    2                   1d4d4fefc0e58       ingress-nginx-admission-patch-2wvtk         ingress-nginx
	3c690158312df       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	9d6a1790759ab       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   a9040c2ca882e       csi-hostpathplugin-c4hl2                    kube-system
	59681c39be97d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   63f35a2a9593c       csi-hostpath-resizer-0                      kube-system
	e3a81c28c5300       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   0a72279b138c8       csi-hostpath-attacher-0                     kube-system
	53b6871ca3d73       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   10 minutes ago      Exited              create                                   0                   a56a89e8a43cc       ingress-nginx-admission-create-xtv6g        ingress-nginx
	81902e2733f2b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd                            10 minutes ago      Running             gadget                                   0                   680d644fbcd5c       gadget-fg7d4                                gadget
	3f96c18e36d8d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   0609845efda87       snapshot-controller-7d9fbc56b8-xfp9p        kube-system
	e1a780c238dba       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   f7f6e990ad6a6       snapshot-controller-7d9fbc56b8-24v9j        kube-system
	b202ac5239a46       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        10 minutes ago      Running             metrics-server                           0                   866ba49820966       metrics-server-85b7d694d7-7dj5n             kube-system
	3315bc7f0cf33       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   37d7d55385491       local-path-provisioner-648f6765c9-cdrdl     local-path-storage
	eab72ef0e77c8       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         10 minutes ago      Running             minikube-ingress-dns                     0                   50079659ba367       kube-ingress-dns-minikube                   kube-system
	b7a4f459688f4       gcr.io/cloud-spanner-emulator/emulator@sha256:335f6daa572494373ab0e16f6f574aced7425f3755182faf42089f838d6f38e1                               10 minutes ago      Running             cloud-spanner-emulator                   0                   e32407548d49a       cloud-spanner-emulator-86bd5cbb97-dzsz8     default
	25c9372900a26       ba04bb24b9575                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   4004c0456a8a4       storage-provisioner                         kube-system
	5797fb80a206f       138784d87c9c5                                                                                                                                11 minutes ago      Running             coredns                                  0                   5fe43b09fa9ef       coredns-66bc5c9577-rz5sw                    kube-system
	dae56d7c71c17       05baa95f5142d                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   ecfef3e21c7ae       kube-proxy-9r8gt                            kube-system
	dba8e74f2f998       b5f57ec6b9867                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   5c4a90f92f416       kube-scheduler-addons-481299                kube-system
	c52ddb45ce912       7eb2c6ff0c5a7                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   2332636f8cc54       kube-controller-manager-addons-481299       kube-system
	9523d2b24f1a6       a1894772a478e                                                                                                                                11 minutes ago      Running             etcd                                     0                   ea5e934575f93       etcd-addons-481299                          kube-system
	48f480a306b71       43911e833d64d                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   ddf248b652d44       kube-apiserver-addons-481299                kube-system
	
	
	==> controller_ingress [8a05108e7b43] <==
	I1018 12:02:08.924425       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.1" state="clean" commit="93248f9ae092f571eb870b7664c534bfc7d00f03" platform="linux/arm64"
	I1018 12:02:09.031156       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1018 12:02:09.043091       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1018 12:02:09.058313       6 nginx.go:273] "Starting NGINX Ingress controller"
	I1018 12:02:09.082853       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"42fc76e3-0c78-4800-9165-c304c60bade9", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1018 12:02:09.082897       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"bad76739-6da4-44ee-ad7e-7821403e700d", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1018 12:02:09.082907       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"46559a9e-dfe6-4342-b6c4-a2e0d79ffec0", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1018 12:02:10.260707       6 nginx.go:319] "Starting NGINX process"
	I1018 12:02:10.260918       6 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1018 12:02:10.261398       6 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1018 12:02:10.261848       6 controller.go:214] "Configuration changes detected, backend reload required"
	I1018 12:02:10.271851       6 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1018 12:02:10.273337       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-285md"
	I1018 12:02:10.280403       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	I1018 12:02:10.330867       6 controller.go:228] "Backend successfully reloaded"
	I1018 12:02:10.330933       6 controller.go:240] "Initial sync, sleeping for 1 second"
	I1018 12:02:10.331249       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-285md", UID:"95fa65aa-137a-4c83-b067-43da76402c6e", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1018 12:02:10.381659       6 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-285md" node="addons-481299"
	  Release:       v1.13.3
	  Build:         93851f05e61d99eea49140c9be73499a3cb92ccc
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [5797fb80a206] <==
	[INFO] 10.244.0.7:54253 - 32829 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002752352s
	[INFO] 10.244.0.7:54253 - 48434 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000139218s
	[INFO] 10.244.0.7:54253 - 23824 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103517s
	[INFO] 10.244.0.7:35646 - 48715 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000276574s
	[INFO] 10.244.0.7:35646 - 52325 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112625s
	[INFO] 10.244.0.7:45943 - 64938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090488s
	[INFO] 10.244.0.7:45943 - 65152 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099734s
	[INFO] 10.244.0.7:33837 - 21640 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105166s
	[INFO] 10.244.0.7:33837 - 21813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009532s
	[INFO] 10.244.0.7:58089 - 39529 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001295941s
	[INFO] 10.244.0.7:58089 - 39709 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001006773s
	[INFO] 10.244.0.7:49666 - 228 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115406s
	[INFO] 10.244.0.7:49666 - 392 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093999s
	[INFO] 10.244.0.26:58341 - 57714 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271404s
	[INFO] 10.244.0.26:33384 - 49094 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121051s
	[INFO] 10.244.0.26:51350 - 39772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197935s
	[INFO] 10.244.0.26:51818 - 1006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135993s
	[INFO] 10.244.0.26:60342 - 22606 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138184s
	[INFO] 10.244.0.26:55393 - 64846 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151477s
	[INFO] 10.244.0.26:33581 - 50204 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064442s
	[INFO] 10.244.0.26:57107 - 22259 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002808697s
	[INFO] 10.244.0.26:34046 - 55453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002593408s
	[INFO] 10.244.0.26:55019 - 57919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003283041s
	[INFO] 10.244.0.29:34729 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000227268s
	[INFO] 10.244.0.29:39903 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120166s
	
	
	==> describe nodes <==
	Name:               addons-481299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-481299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-481299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_00_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-481299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-481299"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:00:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-481299
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:12:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:10:14 +0000   Sat, 18 Oct 2025 12:00:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-481299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9d0b013c-4a29-4459-8491-dd5d81ae22da
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  default                     cloud-spanner-emulator-86bd5cbb97-dzsz8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  gadget                      gadget-fg7d4                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-285md                     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-rz5sw                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-c4hl2                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-481299                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-481299                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-481299                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9r8gt                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-481299                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-85b7d694d7-7dj5n                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 registry-creds-764b6fb674-mrx54                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-24v9j                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-xfp9p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  local-path-storage          local-path-provisioner-648f6765c9-cdrdl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-481299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-481299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-481299 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node addons-481299 event: Registered Node addons-481299 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-481299 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	[Oct18 10:38] overlayfs: idmapped layers are currently not supported
	[ +31.744006] overlayfs: idmapped layers are currently not supported
	[Oct18 10:40] overlayfs: idmapped layers are currently not supported
	[  +7.638461] overlayfs: idmapped layers are currently not supported
	[Oct18 10:42] overlayfs: idmapped layers are currently not supported
	[ +14.986088] overlayfs: idmapped layers are currently not supported
	[Oct18 10:44] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:59] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9523d2b24f1a] <==
	{"level":"warn","ts":"2025-10-18T12:00:38.390619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.407793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.439357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.481909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.499910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.531023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:00:38.698131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.110540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:03.132268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.020980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.073818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.097695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.122888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.166204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.187897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.301315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.414291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.428689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.446079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.463288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.498849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:17.513965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:10:36.778316Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1761}
	{"level":"info","ts":"2025-10-18T12:10:36.842558Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1761,"took":"61.836891ms","hash":840814350,"current-db-size-bytes":10379264,"current-db-size":"10 MB","current-db-size-in-use-bytes":6549504,"current-db-size-in-use":"6.5 MB"}
	{"level":"info","ts":"2025-10-18T12:10:36.842623Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":840814350,"revision":1761,"compact-revision":-1}
	
	
	==> kernel <==
	 12:12:09 up  3:54,  0 user,  load average: 0.70, 1.08, 2.25
	Linux addons-481299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [48f480a306b7] <==
	I1018 12:06:14.346261       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	E1018 12:06:14.432289       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
	E1018 12:06:14.437579       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	E1018 12:06:14.471263       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	I1018 12:06:14.538453       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1018 12:06:14.605852       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1018 12:06:14.732689       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1018 12:06:14.759022       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1018 12:06:14.794923       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1018 12:06:15.131412       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1018 12:06:15.349953       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	W1018 12:06:15.357404       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	I1018 12:06:15.539391       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I1018 12:06:15.717485       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1018 12:06:15.727618       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1018 12:06:15.826396       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1018 12:06:15.848126       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1018 12:06:15.872978       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1018 12:06:15.954825       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1018 12:06:16.718440       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1018 12:06:17.004308       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1018 12:06:35.331248       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50972: use of closed network connection
	E1018 12:06:35.593799       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50998: use of closed network connection
	E1018 12:06:35.767189       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51012: use of closed network connection
	I1018 12:10:39.868349       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c52ddb45ce91] <==
	E1018 12:10:57.597315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:04.964340       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:04.965529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:14.036705       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:14.038162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:15.454635       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:15.455815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:21.974490       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:21.975652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:31.140947       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:31.142258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:39.161133       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:39.162467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:41.783385       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:41.784477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:47.861630       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:47.862897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:48.916412       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:48.917623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:49.300505       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:49.301826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:11:50.367100       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:11:50.368458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:12:06.647431       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:12:06.648481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [dae56d7c71c1] <==
	I1018 12:00:49.514489       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:00:49.626939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:00:49.727608       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:00:49.727648       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:00:49.727722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:00:49.750914       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:00:49.750964       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:00:49.754697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:00:49.754988       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:00:49.755003       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:49.756307       1 config.go:200] "Starting service config controller"
	I1018 12:00:49.756319       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:00:49.756333       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:00:49.756337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:00:49.756361       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:00:49.756365       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:00:49.757732       1 config.go:309] "Starting node config controller"
	I1018 12:00:49.757742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:00:49.757747       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:00:49.856694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:00:49.856705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:00:49.856722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dba8e74f2f99] <==
	I1018 12:00:40.296381       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:00:40.299837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:00:40.300872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:00:40.301158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:00:40.314385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:00:40.314937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:00:40.315179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:00:40.315409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:00:40.315594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:00:40.321949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:00:40.322190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:00:40.322439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:00:40.323011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:00:40.323347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:00:40.323536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:00:40.324267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:00:40.324450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:00:40.324637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:00:40.324822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:00:40.325293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:00:40.325523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:00:40.325545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:00:40.325592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 12:00:41.902177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.194630    2235 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2-data" (OuterVolumeSpecName: "data") pod "a9153a13-0a4c-446b-9d51-e2f3ff58c0c2" (UID: "a9153a13-0a4c-446b-9d51-e2f3ff58c0c2"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.195223    2235 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2-script" (OuterVolumeSpecName: "script") pod "a9153a13-0a4c-446b-9d51-e2f3ff58c0c2" (UID: "a9153a13-0a4c-446b-9d51-e2f3ff58c0c2"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.196313    2235 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2-kube-api-access-clgjn" (OuterVolumeSpecName: "kube-api-access-clgjn") pod "a9153a13-0a4c-446b-9d51-e2f3ff58c0c2" (UID: "a9153a13-0a4c-446b-9d51-e2f3ff58c0c2"). InnerVolumeSpecName "kube-api-access-clgjn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.295022    2235 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2-data\") on node \"addons-481299\" DevicePath \"\""
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.295067    2235 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2-script\") on node \"addons-481299\" DevicePath \"\""
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.295080    2235 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clgjn\" (UniqueName: \"kubernetes.io/projected/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2-kube-api-access-clgjn\") on node \"addons-481299\" DevicePath \"\""
	Oct 18 12:11:24 addons-481299 kubelet[2235]: I1018 12:11:24.590975    2235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9153a13-0a4c-446b-9d51-e2f3ff58c0c2" path="/var/lib/kubelet/pods/a9153a13-0a4c-446b-9d51-e2f3ff58c0c2/volumes"
	Oct 18 12:11:30 addons-481299 kubelet[2235]: E1018 12:11:30.575587    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:11:35 addons-481299 kubelet[2235]: I1018 12:11:35.575977    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:11:42 addons-481299 kubelet[2235]: E1018 12:11:42.580629    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: I1018 12:11:54.147490    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b1a54df-9e12-4a2c-a6bf-659514298233-script\") pod \"helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d\" (UID: \"3b1a54df-9e12-4a2c-a6bf-659514298233\") " pod="local-path-storage/helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: I1018 12:11:54.147560    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg7zg\" (UniqueName: \"kubernetes.io/projected/3b1a54df-9e12-4a2c-a6bf-659514298233-kube-api-access-zg7zg\") pod \"helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d\" (UID: \"3b1a54df-9e12-4a2c-a6bf-659514298233\") " pod="local-path-storage/helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: I1018 12:11:54.147594    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b1a54df-9e12-4a2c-a6bf-659514298233-data\") pod \"helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d\" (UID: \"3b1a54df-9e12-4a2c-a6bf-659514298233\") " pod="local-path-storage/helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: E1018 12:11:54.794116    2235 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: E1018 12:11:54.794187    2235 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: E1018 12:11:54.794277    2235 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d_local-path-storage(3b1a54df-9e12-4a2c-a6bf-659514298233): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: E1018 12:11:54.794317    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d" podUID="3b1a54df-9e12-4a2c-a6bf-659514298233"
	Oct 18 12:11:54 addons-481299 kubelet[2235]: E1018 12:11:54.885878    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d" podUID="3b1a54df-9e12-4a2c-a6bf-659514298233"
	Oct 18 12:11:55 addons-481299 kubelet[2235]: I1018 12:11:55.576280    2235 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-86bd5cbb97-dzsz8" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:11:56 addons-481299 kubelet[2235]: E1018 12:11:56.576050    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="cd2f2cff-a3be-4a44-84b1-1c62af359bbe"
	Oct 18 12:11:57 addons-481299 kubelet[2235]: E1018 12:11:57.578200    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-mrx54" podUID="e7e18bec-e709-4ce6-8eb5-addfe6ca58a2"
	Oct 18 12:12:07 addons-481299 kubelet[2235]: E1018 12:12:07.742208    2235 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:12:07 addons-481299 kubelet[2235]: E1018 12:12:07.742294    2235 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 18 12:12:07 addons-481299 kubelet[2235]: E1018 12:12:07.742387    2235 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d_local-path-storage(3b1a54df-9e12-4a2c-a6bf-659514298233): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:12:07 addons-481299 kubelet[2235]: E1018 12:12:07.742443    2235 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d" podUID="3b1a54df-9e12-4a2c-a6bf-659514298233"
	
	
	==> storage-provisioner [25c9372900a2] <==
	W1018 12:11:43.593748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:45.597349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:45.605600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:47.608675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:47.613666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:49.617040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:49.622085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:51.626469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:51.631111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:53.633899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:53.640559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:55.643847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:55.649570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:57.652523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:57.657523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:59.660375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:11:59.667553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:01.672237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:01.677353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:03.680500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:03.685097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:05.688471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:05.695457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:07.723229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:12:07.730155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-481299 -n addons-481299
helpers_test.go:269: (dbg) Run:  kubectl --context addons-481299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-481299 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-481299 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d: exit status 1 (124.754648ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-481299/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:07:07 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chd6h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-chd6h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-481299
	  Warning  Failed     4m23s                kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m4s (x5 over 5m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m4s (x4 over 5m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m4s (x5 over 5m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x21 over 5m2s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     1s (x21 over 5m2s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9qfd (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-q9qfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xtv6g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wvtk" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-mrx54" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-481299 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-xtv6g ingress-nginx-admission-patch-2wvtk registry-creds-764b6fb674-mrx54 helper-pod-create-pvc-9db2ae54-1927-4092-9d1d-a19e8b00be9d: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.980699275s)
--- FAIL: TestAddons/parallel/LocalPath (345.93s)

x
+
TestFunctional/parallel/DashboardCmd (302.25s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-918162 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-918162 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-918162 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-918162 --alsologtostderr -v=1] stderr:
I1018 12:32:25.187856  893578 out.go:360] Setting OutFile to fd 1 ...
I1018 12:32:25.189159  893578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:25.189226  893578 out.go:374] Setting ErrFile to fd 2...
I1018 12:32:25.189236  893578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:25.189531  893578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
I1018 12:32:25.189824  893578 mustload.go:65] Loading cluster: functional-918162
I1018 12:32:25.190226  893578 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:25.190743  893578 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:25.208102  893578 host.go:66] Checking if "functional-918162" exists ...
I1018 12:32:25.208465  893578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:32:25.270758  893578 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:32:25.254019376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1018 12:32:25.270878  893578 api_server.go:166] Checking apiserver status ...
I1018 12:32:25.270955  893578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 12:32:25.270997  893578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:25.289300  893578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:25.406708  893578 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9430/cgroup
I1018 12:32:25.420193  893578 api_server.go:182] apiserver freezer: "5:freezer:/docker/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8/kubepods/burstable/pod02d758f002ab1cd8a3ad5cbba5edf479/52b06613ed5c2e292362cdb55162f4caab84169aea2970ed83bedd1371d4de86"
I1018 12:32:25.420296  893578 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8/kubepods/burstable/pod02d758f002ab1cd8a3ad5cbba5edf479/52b06613ed5c2e292362cdb55162f4caab84169aea2970ed83bedd1371d4de86/freezer.state
I1018 12:32:25.428254  893578 api_server.go:204] freezer state: "THAWED"
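The lines above locate the apiserver container's freezer cgroup by grepping `^[0-9]+:freezer:` out of `/proc/<pid>/cgroup`, then read `freezer.state` under that path to confirm the process is `THAWED`. The extraction step can be sketched as a minimal parser (this is an illustrative helper, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// freezerPath returns the freezer hierarchy path from the contents of
// /proc/<pid>/cgroup, i.e. the "<n>:freezer:<path>" line the grep matches.
func freezerPath(cgroup string) (string, bool) {
	for _, line := range strings.Split(cgroup, "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return parts[2], true
		}
	}
	return "", false
}

func main() {
	// Shortened sample of a /proc/<pid>/cgroup file (cgroup v1 layout).
	sample := "6:memory:/docker/abc\n5:freezer:/docker/abc/kubepods/pod1/c1\n4:cpu:/docker/abc"
	p, ok := freezerPath(sample)
	fmt.Println(ok, p)
}
```

With the path in hand, the runner simply cats `<freezer mount>/<path>/freezer.state` and expects `THAWED`.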
I1018 12:32:25.428316  893578 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1018 12:32:25.437947  893578 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1018 12:32:25.437993  893578 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1018 12:32:25.438184  893578 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:25.438216  893578 addons.go:69] Setting dashboard=true in profile "functional-918162"
I1018 12:32:25.438233  893578 addons.go:238] Setting addon dashboard=true in "functional-918162"
I1018 12:32:25.438263  893578 host.go:66] Checking if "functional-918162" exists ...
I1018 12:32:25.438684  893578 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:25.460475  893578 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1018 12:32:25.463533  893578 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1018 12:32:25.466310  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1018 12:32:25.466333  893578 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1018 12:32:25.466408  893578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:25.483952  893578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:25.596228  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1018 12:32:25.596253  893578 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1018 12:32:25.610654  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1018 12:32:25.610681  893578 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1018 12:32:25.623970  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1018 12:32:25.623994  893578 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1018 12:32:25.637862  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1018 12:32:25.637883  893578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1018 12:32:25.651361  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1018 12:32:25.651393  893578 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1018 12:32:25.665350  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1018 12:32:25.665396  893578 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1018 12:32:25.679140  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1018 12:32:25.679163  893578 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1018 12:32:25.694132  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1018 12:32:25.694161  893578 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1018 12:32:25.709678  893578 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1018 12:32:25.709723  893578 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1018 12:32:25.722860  893578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1018 12:32:26.576325  893578 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-918162 addons enable metrics-server

I1018 12:32:26.579209  893578 addons.go:201] Writing out "functional-918162" config to set dashboard=true...
W1018 12:32:26.579522  893578 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1018 12:32:26.580267  893578 kapi.go:59] client config for functional-918162: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.key", CAFile:"/home/jenkins/minikube-integration/21647-835760/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1018 12:32:26.580955  893578 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1018 12:32:26.580977  893578 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1018 12:32:26.580984  893578 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1018 12:32:26.581059  893578 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1018 12:32:26.581070  893578 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1018 12:32:26.604371  893578 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  a3ed9bac-ae9f-432a-a277-3527a0fdb4a6 1313 0 2025-10-18 12:32:26 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-18 12:32:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.28.224,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.28.224],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1018 12:32:26.604570  893578 out.go:285] * Launching proxy ...
* Launching proxy ...
I1018 12:32:26.604676  893578 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-918162 proxy --port 36195]
I1018 12:32:26.604997  893578 dashboard.go:157] Waiting for kubectl to output host:port ...
I1018 12:32:26.653773  893578 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1018 12:32:26.653829  893578 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1018 12:32:26.671545  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04ef49d5-9d74-413c-be1c-50e70178bb9e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a2880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2a00 TLS:<nil>}
I1018 12:32:26.671624  893578 retry.go:31] will retry after 138.782µs: Temporary Error: unexpected response code: 503
I1018 12:32:26.678122  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04ec9281-c28e-4605-9a35-625ef4953bcb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a2940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2b40 TLS:<nil>}
I1018 12:32:26.678184  893578 retry.go:31] will retry after 215.472µs: Temporary Error: unexpected response code: 503
I1018 12:32:26.682721  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96f2c8d5-b7b3-4e97-9da0-c0395070b2b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a29c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2c80 TLS:<nil>}
I1018 12:32:26.682784  893578 retry.go:31] will retry after 335.073µs: Temporary Error: unexpected response code: 503
I1018 12:32:26.687932  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dfc6ebaa-be54-445e-8eba-df719ee6942c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a2a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2dc0 TLS:<nil>}
I1018 12:32:26.688010  893578 retry.go:31] will retry after 332.355µs: Temporary Error: unexpected response code: 503
I1018 12:32:26.692512  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a84133c9-7e45-4fd3-b6a7-d3bfca09c836] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a2b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2f00 TLS:<nil>}
I1018 12:32:26.692577  893578 retry.go:31] will retry after 592.278µs: Temporary Error: unexpected response code: 503
I1018 12:32:26.698333  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce5bba87-2e3c-4c51-9d4e-1c3a39f6ea6d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a2bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3040 TLS:<nil>}
I1018 12:32:26.698405  893578 retry.go:31] will retry after 755.265µs: Temporary Error: unexpected response code: 503
I1018 12:32:26.702574  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3a4faa5-7328-4b58-abc5-d8998f4ce450] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a2c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3180 TLS:<nil>}
I1018 12:32:26.702656  893578 retry.go:31] will retry after 1.312218ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.708659  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4434c1c8-b11b-4e51-819a-e13d9780966c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000486c80 TLS:<nil>}
I1018 12:32:26.708725  893578 retry.go:31] will retry after 1.180393ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.726174  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f461717c-6cb1-47df-ae83-94646e97c778] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000486dc0 TLS:<nil>}
I1018 12:32:26.726239  893578 retry.go:31] will retry after 2.395841ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.732362  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2c5f4766-c18f-4f07-a81d-197820246cde] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000486f00 TLS:<nil>}
I1018 12:32:26.732434  893578 retry.go:31] will retry after 3.275854ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.739557  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c767d085-b171-4218-bafe-03c0fbef85ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487040 TLS:<nil>}
I1018 12:32:26.739619  893578 retry.go:31] will retry after 7.624788ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.750980  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[063c9f02-53b1-4812-87af-4b500c42502c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a3040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487180 TLS:<nil>}
I1018 12:32:26.751070  893578 retry.go:31] will retry after 10.039389ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.765687  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[943d60c7-0595-4c02-b25a-dcee754d3ef5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004872c0 TLS:<nil>}
I1018 12:32:26.765754  893578 retry.go:31] will retry after 7.442634ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.777653  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a964a86-bf9b-4cfa-978c-c74d734d6b8a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a31c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487400 TLS:<nil>}
I1018 12:32:26.777749  893578 retry.go:31] will retry after 16.050051ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.797097  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[628b6c8e-152d-41a7-ba61-275629c9f8f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487540 TLS:<nil>}
I1018 12:32:26.797214  893578 retry.go:31] will retry after 15.684923ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.816220  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e1575137-f0e2-4781-8d71-84f5d0ab3a73] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007a3380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b32c0 TLS:<nil>}
I1018 12:32:26.816286  893578 retry.go:31] will retry after 41.806032ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.861831  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d0abacf-5cc6-4a7b-8607-9f4be00485d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487900 TLS:<nil>}
I1018 12:32:26.861900  893578 retry.go:31] will retry after 48.677331ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.915216  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9643f680-bc5d-4e31-a58b-391efb2a4373] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f0740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487a40 TLS:<nil>}
I1018 12:32:26.915285  893578 retry.go:31] will retry after 53.336712ms: Temporary Error: unexpected response code: 503
I1018 12:32:26.972821  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8e459b1-e880-4369-b4cd-5d84163cb778] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:26 GMT]] Body:0x40007f07c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487b80 TLS:<nil>}
I1018 12:32:26.972890  893578 retry.go:31] will retry after 142.101665ms: Temporary Error: unexpected response code: 503
I1018 12:32:27.120469  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7e31510-db78-435a-9a17-4f3cec13e6e4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:27 GMT]] Body:0x40007f08c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487cc0 TLS:<nil>}
I1018 12:32:27.120535  893578 retry.go:31] will retry after 328.207048ms: Temporary Error: unexpected response code: 503
I1018 12:32:27.452158  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc89e61b-2977-4dcc-80dd-14c0981f7d60] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:27 GMT]] Body:0x40007f0940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280000 TLS:<nil>}
I1018 12:32:27.452250  893578 retry.go:31] will retry after 279.834678ms: Temporary Error: unexpected response code: 503
I1018 12:32:27.735710  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06600dab-2442-4c12-93c1-0bb58008ee78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:27 GMT]] Body:0x40007f0a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3400 TLS:<nil>}
I1018 12:32:27.735797  893578 retry.go:31] will retry after 650.997177ms: Temporary Error: unexpected response code: 503
I1018 12:32:28.390708  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b6140456-7ce2-467c-a2d7-cde73341e699] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:28 GMT]] Body:0x40007f0b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3540 TLS:<nil>}
I1018 12:32:28.390776  893578 retry.go:31] will retry after 744.496222ms: Temporary Error: unexpected response code: 503
I1018 12:32:29.139070  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b57dcad-07be-409b-afd6-6ff6d5a07a34] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:29 GMT]] Body:0x40016f4000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3680 TLS:<nil>}
I1018 12:32:29.139139  893578 retry.go:31] will retry after 993.325158ms: Temporary Error: unexpected response code: 503
I1018 12:32:30.136270  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e02031b6-e737-4ffe-817f-e257a438064e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:30 GMT]] Body:0x40007f0c00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280140 TLS:<nil>}
I1018 12:32:30.136349  893578 retry.go:31] will retry after 848.219139ms: Temporary Error: unexpected response code: 503
I1018 12:32:30.991270  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[34a36cec-3216-4ab1-a320-5568c5565e51] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:30 GMT]] Body:0x40016f4140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002803c0 TLS:<nil>}
I1018 12:32:30.991346  893578 retry.go:31] will retry after 3.554823833s: Temporary Error: unexpected response code: 503
I1018 12:32:34.549891  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c825f4e7-d499-4678-b2e8-58fb8865a780] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:34 GMT]] Body:0x40007f0d40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3a40 TLS:<nil>}
I1018 12:32:34.549956  893578 retry.go:31] will retry after 2.90400978s: Temporary Error: unexpected response code: 503
I1018 12:32:37.457393  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a45047a-9444-4fde-8643-9b49812cb475] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:37 GMT]] Body:0x40016f4240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280640 TLS:<nil>}
I1018 12:32:37.457457  893578 retry.go:31] will retry after 2.891700538s: Temporary Error: unexpected response code: 503
I1018 12:32:40.352884  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a558d4f0-c5bd-49f9-b01c-55e0cce1f394] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:40 GMT]] Body:0x40007f0f00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3b80 TLS:<nil>}
I1018 12:32:40.352938  893578 retry.go:31] will retry after 9.157036509s: Temporary Error: unexpected response code: 503
I1018 12:32:49.514665  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[77f94583-a7cf-4320-821a-289433f813ef] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:49 GMT]] Body:0x40016f42c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3cc0 TLS:<nil>}
I1018 12:32:49.514731  893578 retry.go:31] will retry after 11.61986221s: Temporary Error: unexpected response code: 503
I1018 12:33:01.137970  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce519d8c-5edb-4679-b92b-2a80fbf01159] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:33:01 GMT]] Body:0x40007f1180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280780 TLS:<nil>}
I1018 12:33:01.138043  893578 retry.go:31] will retry after 13.126664117s: Temporary Error: unexpected response code: 503
I1018 12:33:14.270560  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[91f7b896-7e8f-4b72-9dbc-845a643e859c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:33:14 GMT]] Body:0x40016f43c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002cc000 TLS:<nil>}
I1018 12:33:14.270620  893578 retry.go:31] will retry after 22.060740462s: Temporary Error: unexpected response code: 503
I1018 12:33:36.335181  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6afc4ae7-7ad7-4af4-bc05-82c1c48e9ef8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:33:36 GMT]] Body:0x40016f4480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280a00 TLS:<nil>}
I1018 12:33:36.335244  893578 retry.go:31] will retry after 40.030025474s: Temporary Error: unexpected response code: 503
I1018 12:34:16.368550  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d0db25db-e788-43fd-ac63-3de31c9f3f5d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:34:16 GMT]] Body:0x40007f1300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002cc140 TLS:<nil>}
I1018 12:34:16.368614  893578 retry.go:31] will retry after 1m17.754779375s: Temporary Error: unexpected response code: 503
I1018 12:35:34.128088  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d236e89d-7a7b-4b4e-b15d-5c11b5d1aed4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:35:34 GMT]] Body:0x40007f01c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280b40 TLS:<nil>}
I1018 12:35:34.128159  893578 retry.go:31] will retry after 45.580808996s: Temporary Error: unexpected response code: 503
I1018 12:36:19.713759  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e88e8dee-70db-4e7c-bed9-ddcee8cbb4ea] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:36:19 GMT]] Body:0x40007f0300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280dc0 TLS:<nil>}
I1018 12:36:19.713824  893578 retry.go:31] will retry after 40.948484086s: Temporary Error: unexpected response code: 503
I1018 12:37:00.666400  893578 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bdac6481-50cc-4009-9146-8019dd0d330e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:37:00 GMT]] Body:0x40016f4100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000280f00 TLS:<nil>}
I1018 12:37:00.666472  893578 retry.go:31] will retry after 36.282143779s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-918162
helpers_test.go:243: (dbg) docker inspect functional-918162:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8",
	        "Created": "2025-10-18T12:22:45.83261912Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 877349,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:22:45.894451376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8/hosts",
	        "LogPath": "/var/lib/docker/containers/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8/b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8-json.log",
	        "Name": "/functional-918162",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-918162:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-918162",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b297a8f4fc2f2e8879f6d8195ff8f6c6a21ddb57382bbbb687a07a107a6bd0e8",
	                "LowerDir": "/var/lib/docker/overlay2/c5d68f623247a04dfafb62754688aed17b5f438d6e2f7ff44d92b81b685f8fbe-init/diff:/var/lib/docker/overlay2/6da29ec63a7e3e82b22991cef36ed721f46b0e2f648634b12d0ad6b7225284d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d68f623247a04dfafb62754688aed17b5f438d6e2f7ff44d92b81b685f8fbe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d68f623247a04dfafb62754688aed17b5f438d6e2f7ff44d92b81b685f8fbe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d68f623247a04dfafb62754688aed17b5f438d6e2f7ff44d92b81b685f8fbe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-918162",
	                "Source": "/var/lib/docker/volumes/functional-918162/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-918162",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-918162",
	                "name.minikube.sigs.k8s.io": "functional-918162",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a0ab3c74eb00b3bb676fe7929e97254c611d01b2c9c669b4092df699f5b50de",
	            "SandboxKey": "/var/run/docker/netns/6a0ab3c74eb0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33890"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-918162": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:0a:c8:6b:80:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ff9ac25e77b7e7618225d06af64a72055e200b9b9eca31ce32ab76a253baeeb",
	                    "EndpointID": "d57721b0efaa49e132693a4a8abb9710525eab36127bc54dda3f0b9e53786555",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-918162",
	                        "b297a8f4fc2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-918162 -n functional-918162
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-918162 logs -n 25: (1.217418799s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-918162 ssh sudo cat /etc/ssl/certs/837622.pem                                                                   │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh sudo cat /usr/share/ca-certificates/837622.pem                                                       │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh sudo cat /etc/ssl/certs/8376222.pem                                                                  │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh sudo cat /usr/share/ca-certificates/8376222.pem                                                      │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ docker-env     │ functional-918162 docker-env                                                                                               │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ docker-env     │ functional-918162 docker-env                                                                                               │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh sudo cat /etc/test/nested/copy/837622/hosts                                                          │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ cp             │ functional-918162 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh -n functional-918162 sudo cat /home/docker/cp-test.txt                                               │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ cp             │ functional-918162 cp functional-918162:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1393531691/001/cp-test.txt │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh -n functional-918162 sudo cat /home/docker/cp-test.txt                                               │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ cp             │ functional-918162 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh -n functional-918162 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ image          │ functional-918162 image ls --format short --alsologtostderr                                                                │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ image          │ functional-918162 image ls --format yaml --alsologtostderr                                                                 │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ ssh            │ functional-918162 ssh pgrep buildkitd                                                                                      │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │                     │
	│ image          │ functional-918162 image build -t localhost/my-image:functional-918162 testdata/build --alsologtostderr                     │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ image          │ functional-918162 image ls                                                                                                 │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ image          │ functional-918162 image ls --format json --alsologtostderr                                                                 │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ image          │ functional-918162 image ls --format table --alsologtostderr                                                                │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ update-context │ functional-918162 update-context --alsologtostderr -v=2                                                                    │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ update-context │ functional-918162 update-context --alsologtostderr -v=2                                                                    │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	│ update-context │ functional-918162 update-context --alsologtostderr -v=2                                                                    │ functional-918162 │ jenkins │ v1.37.0 │ 18 Oct 25 12:32 UTC │ 18 Oct 25 12:32 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:32:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:32:24.907874  893450 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:32:24.908096  893450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:32:24.908122  893450 out.go:374] Setting ErrFile to fd 2...
	I1018 12:32:24.908139  893450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:32:24.908419  893450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 12:32:24.908816  893450 out.go:368] Setting JSON to false
	I1018 12:32:24.911934  893450 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15295,"bootTime":1760775450,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 12:32:24.912723  893450 start.go:141] virtualization:  
	I1018 12:32:24.916206  893450 out.go:179] * [functional-918162] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:32:24.920555  893450 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:32:24.920630  893450 notify.go:220] Checking for updates...
	I1018 12:32:24.927521  893450 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:32:24.930529  893450 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:32:24.933509  893450 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 12:32:24.937146  893450 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:32:24.940153  893450 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:32:24.943525  893450 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:32:24.944105  893450 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:32:24.975282  893450 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:32:24.975515  893450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:32:25.047586  893450 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:32:25.034441369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:32:25.047697  893450 docker.go:318] overlay module found
	I1018 12:32:25.050756  893450 out.go:179] * Using the docker driver based on existing profile
	I1018 12:32:25.053706  893450 start.go:305] selected driver: docker
	I1018 12:32:25.053730  893450 start.go:925] validating driver "docker" against &{Name:functional-918162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-918162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:32:25.053836  893450 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:32:25.053940  893450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:32:25.121636  893450 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:32:25.105444216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:32:25.122862  893450 cni.go:84] Creating CNI manager for ""
	I1018 12:32:25.122947  893450 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 12:32:25.123014  893450 start.go:349] cluster config:
	{Name:functional-918162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-918162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:32:25.127216  893450 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Oct 18 12:32:27 functional-918162 cri-dockerd[7717]: time="2025-10-18T12:32:27Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Oct 18 12:32:27 functional-918162 dockerd[6961]: time="2025-10-18T12:32:27.261351792Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:32:27 functional-918162 dockerd[6961]: time="2025-10-18T12:32:27.387279207Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:32:27 functional-918162 dockerd[6961]: time="2025-10-18T12:32:27.414119501Z" level=info msg="ignoring event" container=6d96581b085f2e162ed02b525a556c65e1ce839e24215132767ef937a8fce7e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:32:27 functional-918162 dockerd[6961]: time="2025-10-18T12:32:27.468843106Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:32:27 functional-918162 dockerd[6961]: time="2025-10-18T12:32:27.557648907Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:32:28 functional-918162 dockerd[6961]: time="2025-10-18T12:32:28.645158028Z" level=info msg="ignoring event" container=92909eac19fd1ec3a33ad04a46ec34b1aeb443f077797f841e87a4549528ac88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 18 12:32:39 functional-918162 dockerd[6961]: time="2025-10-18T12:32:39.466361593Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:32:39 functional-918162 dockerd[6961]: time="2025-10-18T12:32:39.570508638Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:32:43 functional-918162 dockerd[6961]: time="2025-10-18T12:32:43.477477945Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:32:43 functional-918162 dockerd[6961]: time="2025-10-18T12:32:43.572680526Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:33:07 functional-918162 dockerd[6961]: time="2025-10-18T12:33:07.454692528Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:33:07 functional-918162 dockerd[6961]: time="2025-10-18T12:33:07.546065715Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:33:10 functional-918162 dockerd[6961]: time="2025-10-18T12:33:10.450293296Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:33:10 functional-918162 dockerd[6961]: time="2025-10-18T12:33:10.619973470Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:33:10 functional-918162 cri-dockerd[7717]: time="2025-10-18T12:33:10Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Oct 18 12:33:52 functional-918162 dockerd[6961]: time="2025-10-18T12:33:52.451476902Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:33:52 functional-918162 dockerd[6961]: time="2025-10-18T12:33:52.546315142Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:33:53 functional-918162 dockerd[6961]: time="2025-10-18T12:33:53.456677255Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:33:53 functional-918162 dockerd[6961]: time="2025-10-18T12:33:53.549003620Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:35:25 functional-918162 dockerd[6961]: time="2025-10-18T12:35:25.451126023Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:35:25 functional-918162 dockerd[6961]: time="2025-10-18T12:35:25.650867605Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:35:25 functional-918162 cri-dockerd[7717]: time="2025-10-18T12:35:25Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Oct 18 12:35:27 functional-918162 dockerd[6961]: time="2025-10-18T12:35:27.456654723Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:35:27 functional-918162 dockerd[6961]: time="2025-10-18T12:35:27.538588434Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6d96581b085f2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   92909eac19fd1       busybox-mount                               default
	2d25de80867f0       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   0b10a6ce737b7       hello-node-connect-7d85dfc575-rgr5j         default
	4e13f7819475a       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   1a6d6fe74eac3       hello-node-75c85bcc94-q8vg9                 default
	3e22d81a23c58       nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6                         11 minutes ago      Running             myfrontend                0                   7f3c0f3e15410       sp-pod                                      default
	368b4e95d3c39       nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                         11 minutes ago      Running             nginx                     0                   34d2d301c88f1       nginx-svc                                   default
	c33b253cbb0d3       05baa95f5142d                                                                                         11 minutes ago      Running             kube-proxy                2                   dda2d29d1a592       kube-proxy-t2m7t                            kube-system
	c3cab8740f110       138784d87c9c5                                                                                         11 minutes ago      Running             coredns                   2                   e65a887934fc6       coredns-66bc5c9577-2jgzj                    kube-system
	110911358a41c       ba04bb24b9575                                                                                         11 minutes ago      Running             storage-provisioner       3                   a99aa647ea9c9       storage-provisioner                         kube-system
	222393d07f794       b5f57ec6b9867                                                                                         11 minutes ago      Running             kube-scheduler            3                   f6432df9cad10       kube-scheduler-functional-918162            kube-system
	52b06613ed5c2       43911e833d64d                                                                                         11 minutes ago      Running             kube-apiserver            0                   c465d354027b4       kube-apiserver-functional-918162            kube-system
	a30e7b60361c6       a1894772a478e                                                                                         11 minutes ago      Running             etcd                      2                   d815c16a842d5       etcd-functional-918162                      kube-system
	abde11914e6ea       7eb2c6ff0c5a7                                                                                         11 minutes ago      Running             kube-controller-manager   3                   6a50b52a20cd6       kube-controller-manager-functional-918162   kube-system
	df5883d0d3b50       7eb2c6ff0c5a7                                                                                         11 minutes ago      Exited              kube-controller-manager   2                   c8a8b02cc683f       kube-controller-manager-functional-918162   kube-system
	1431e6ebc1f99       b5f57ec6b9867                                                                                         11 minutes ago      Exited              kube-scheduler            2                   46e12ecced3e7       kube-scheduler-functional-918162            kube-system
	a84504ad24d3b       ba04bb24b9575                                                                                         11 minutes ago      Exited              storage-provisioner       2                   0c8fd21d93bbe       storage-provisioner                         kube-system
	25c22ed03f761       138784d87c9c5                                                                                         12 minutes ago      Exited              coredns                   1                   7a9813bd4e5b6       coredns-66bc5c9577-2jgzj                    kube-system
	e07efc2575b40       05baa95f5142d                                                                                         12 minutes ago      Exited              kube-proxy                1                   a099b3de4fd67       kube-proxy-t2m7t                            kube-system
	29f2f9d5bc204       a1894772a478e                                                                                         12 minutes ago      Exited              etcd                      1                   3b2a5fc281817       etcd-functional-918162                      kube-system
	
	
	==> coredns [25c22ed03f76] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39922 - 11868 "HINFO IN 6236582767541961759.5503243279486802728. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017973416s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c3cab8740f11] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38338 - 37798 "HINFO IN 690274458111079266.5367955518433296417. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012338593s
	
	
	==> describe nodes <==
	Name:               functional-918162
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-918162
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=functional-918162
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_23_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:23:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-918162
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:37:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:37:12 +0000   Sat, 18 Oct 2025 12:23:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:37:12 +0000   Sat, 18 Oct 2025 12:23:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:37:12 +0000   Sat, 18 Oct 2025 12:23:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:37:12 +0000   Sat, 18 Oct 2025 12:23:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-918162
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ee586aaf-2843-40d5-9460-01d1c7207a1a
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-q8vg9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-rgr5j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-2jgzj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-functional-918162                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kube-apiserver-functional-918162              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-918162     200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-t2m7t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-functional-918162              100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vngrm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sp7d7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node functional-918162 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node functional-918162 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node functional-918162 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeReady                14m                kubelet          Node functional-918162 status is now: NodeReady
	  Normal   RegisteredNode           14m                node-controller  Node functional-918162 event: Registered Node functional-918162 in Controller
	  Normal   NodeNotReady             13m                kubelet          Node functional-918162 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-918162 event: Registered Node functional-918162 in Controller
	  Warning  ContainerGCFailed        12m (x2 over 13m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-918162 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-918162 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-918162 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node functional-918162 event: Registered Node functional-918162 in Controller
	
	
	==> dmesg <==
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	[Oct18 10:38] overlayfs: idmapped layers are currently not supported
	[ +31.744006] overlayfs: idmapped layers are currently not supported
	[Oct18 10:40] overlayfs: idmapped layers are currently not supported
	[  +7.638461] overlayfs: idmapped layers are currently not supported
	[Oct18 10:42] overlayfs: idmapped layers are currently not supported
	[ +14.986088] overlayfs: idmapped layers are currently not supported
	[Oct18 10:44] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:59] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [29f2f9d5bc20] <==
	{"level":"warn","ts":"2025-10-18T12:24:35.012741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:35.082081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:35.128832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:35.155872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:35.183528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:35.221562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:35.337516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33116","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:25:16.829731Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:25:16.829798Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-918162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T12:25:16.829888Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:25:23.832554Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:25:23.832655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:25:23.832771Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:25:23.832905Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:25:23.832916Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:25:23.832836Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:25:23.832927Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-18T12:25:23.832963Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T12:25:23.832990Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T12:25:23.833006Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-18T12:25:23.832936Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:23.839683Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T12:25:23.839760Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:25:23.839791Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T12:25:23.839799Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-918162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a30e7b60361c] <==
	{"level":"warn","ts":"2025-10-18T12:25:37.535368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.555977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.571547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.589237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.608298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.626670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.663081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.685147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.709112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.729733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.741234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.768510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.792289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.817638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.826377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.842403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.857971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.875796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.913892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.955843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:37.981855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:38.042556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38752","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:35:36.704319Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1146}
	{"level":"info","ts":"2025-10-18T12:35:36.728157Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1146,"took":"23.454372ms","hash":13462522,"current-db-size-bytes":3719168,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1871872,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-18T12:35:36.728216Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":13462522,"revision":1146,"compact-revision":-1}
	
	
	==> kernel <==
	 12:37:26 up  4:19,  0 user,  load average: 0.08, 0.43, 0.90
	Linux functional-918162 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [52b06613ed5c] <==
	I1018 12:25:39.023705       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:25:39.024501       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:25:39.027362       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:25:39.027391       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:25:39.027398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:25:39.027405       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:25:39.044575       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:25:39.379134       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:25:39.737232       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:25:40.594897       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:25:40.640132       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:25:40.673651       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:25:40.681750       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:25:42.412881       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:25:42.612286       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:25:42.660266       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:25:56.710869       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.6.205"}
	I1018 12:26:03.098058       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.148.202"}
	I1018 12:26:11.810973       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.106.219"}
	E1018 12:26:21.160766       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57044: use of closed network connection
	I1018 12:26:28.447495       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.34.131"}
	I1018 12:32:26.165565       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:32:26.534838       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.28.224"}
	I1018 12:32:26.561790       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.219.18"}
	I1018 12:35:38.924702       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [abde11914e6e] <==
	I1018 12:25:42.302972       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:25:42.303019       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 12:25:42.303192       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 12:25:42.303379       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:25:42.303529       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:25:42.303761       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:25:42.303381       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:25:42.306464       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:25:42.313726       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 12:25:42.314026       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:25:42.323344       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:25:42.323373       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:25:42.323382       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:25:42.324042       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:25:42.326158       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:25:42.328885       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:25:42.337384       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 12:32:26.271273       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.284846       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.317538       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.325460       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.359269       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.360010       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.381907       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:32:26.390379       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [df5883d0d3b5] <==
	
	
	==> kube-proxy [c33b253cbb0d] <==
	I1018 12:25:40.379805       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:25:40.475918       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:25:40.577513       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:25:40.580209       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:25:40.580401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:25:40.624152       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:25:40.624731       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:25:40.630911       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:25:40.631609       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:25:40.632477       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:40.643191       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:25:40.643276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:25:40.644477       1 config.go:200] "Starting service config controller"
	I1018 12:25:40.649270       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:25:40.645961       1 config.go:309] "Starting node config controller"
	I1018 12:25:40.654443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:25:40.654643       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:25:40.645001       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:25:40.656124       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:25:40.750328       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:25:40.752368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:25:40.762047       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e07efc2575b4] <==
	I1018 12:24:36.165621       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:24:36.939888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:24:37.040408       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:24:37.040453       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:24:37.040547       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:24:37.636252       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:24:37.636314       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:24:37.861567       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:24:37.861863       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:24:37.861880       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:24:37.863384       1 config.go:200] "Starting service config controller"
	I1018 12:24:37.869338       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:24:37.885099       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:24:37.885410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:24:37.885563       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:24:37.885655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:24:37.886281       1 config.go:309] "Starting node config controller"
	I1018 12:24:37.886444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:24:37.886554       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:24:37.976149       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:24:37.986574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:24:37.986891       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1431e6ebc1f9] <==
	I1018 12:25:30.579535       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [222393d07f79] <==
	I1018 12:25:37.300544       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:25:38.857751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:25:38.857784       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:25:38.857794       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:25:38.857810       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:25:38.935072       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:25:38.935379       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:38.942191       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:25:38.943147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:38.945052       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:38.947439       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:25:39.063071       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:35:25 functional-918162 kubelet[9088]: E1018 12:35:25.654252    9088 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:35:25 functional-918162 kubelet[9088]: E1018 12:35:25.654316    9088 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 18 12:35:25 functional-918162 kubelet[9088]: E1018 12:35:25.654411    9088 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-sp7d7_kubernetes-dashboard(0552ff56-be8d-4834-b90b-1f7fe42b9ede): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:35:25 functional-918162 kubelet[9088]: E1018 12:35:25.654446    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:35:27 functional-918162 kubelet[9088]: E1018 12:35:27.541843    9088 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:35:27 functional-918162 kubelet[9088]: E1018 12:35:27.541936    9088 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 12:35:27 functional-918162 kubelet[9088]: E1018 12:35:27.542044    9088 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-vngrm_kubernetes-dashboard(5f76f100-2885-421c-8d9f-eb064f8860b7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:35:27 functional-918162 kubelet[9088]: E1018 12:35:27.543359    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:35:36 functional-918162 kubelet[9088]: E1018 12:35:36.406172    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:35:42 functional-918162 kubelet[9088]: E1018 12:35:42.406324    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:35:47 functional-918162 kubelet[9088]: E1018 12:35:47.413947    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:35:57 functional-918162 kubelet[9088]: E1018 12:35:57.407343    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:35:58 functional-918162 kubelet[9088]: E1018 12:35:58.406785    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:36:10 functional-918162 kubelet[9088]: E1018 12:36:10.406110    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:36:11 functional-918162 kubelet[9088]: E1018 12:36:11.406208    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:36:23 functional-918162 kubelet[9088]: E1018 12:36:23.412729    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:36:25 functional-918162 kubelet[9088]: E1018 12:36:25.408215    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:36:35 functional-918162 kubelet[9088]: E1018 12:36:35.406421    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:36:37 functional-918162 kubelet[9088]: E1018 12:36:37.408379    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:36:48 functional-918162 kubelet[9088]: E1018 12:36:48.406478    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:36:52 functional-918162 kubelet[9088]: E1018 12:36:52.405723    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:37:03 functional-918162 kubelet[9088]: E1018 12:37:03.409890    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:37:07 functional-918162 kubelet[9088]: E1018 12:37:07.413049    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	Oct 18 12:37:18 functional-918162 kubelet[9088]: E1018 12:37:18.406834    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sp7d7" podUID="0552ff56-be8d-4834-b90b-1f7fe42b9ede"
	Oct 18 12:37:22 functional-918162 kubelet[9088]: E1018 12:37:22.406377    9088 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vngrm" podUID="5f76f100-2885-421c-8d9f-eb064f8860b7"
	
	
	==> storage-provisioner [110911358a41] <==
	W1018 12:37:00.729329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:02.732999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:02.739716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:04.743520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:04.748407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:06.752450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:06.759197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:08.763055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:08.768037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:10.772593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:10.777330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:12.780752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:12.787203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:14.790383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:14.794773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:16.797835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:16.804496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:18.807946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:18.812944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:20.815907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:20.822487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:22.825427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:22.832048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:24.835265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:37:24.839875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a84504ad24d3] <==
	I1018 12:25:29.267690       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:25:29.269050       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-918162 -n functional-918162
helpers_test.go:269: (dbg) Run:  kubectl --context functional-918162 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-vngrm kubernetes-dashboard-855c9754f9-sp7d7
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-918162 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-vngrm kubernetes-dashboard-855c9754f9-sp7d7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-918162 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-vngrm kubernetes-dashboard-855c9754f9-sp7d7: exit status 1 (92.938954ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-918162/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:32:24 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://6d96581b085f2e162ed02b525a556c65e1ce839e24215132767ef937a8fce7e0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 12:32:27 +0000
	      Finished:     Sat, 18 Oct 2025 12:32:27 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hb4pg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hb4pg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m3s  default-scheduler  Successfully assigned default/busybox-mount to functional-918162
	  Normal  Pulling    5m3s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.231s (2.231s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5m    kubelet            Created container: mount-munger
	  Normal  Started    5m    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vngrm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sp7d7" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-918162 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-vngrm kubernetes-dashboard-855c9754f9-sp7d7: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.25s)


Test pass (316/347)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.43
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.99
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.17
18 TestDownloadOnly/v1.34.1/DeleteAll 0.38
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.62
22 TestOffline 62.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 188.51
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.89
35 TestAddons/parallel/Registry 15.83
36 TestAddons/parallel/RegistryCreds 0.74
38 TestAddons/parallel/InspektorGadget 5.22
39 TestAddons/parallel/MetricsServer 5.92
42 TestAddons/parallel/Headlamp 17.16
43 TestAddons/parallel/CloudSpanner 6.58
45 TestAddons/parallel/NvidiaDevicePlugin 6.6
46 TestAddons/parallel/Yakd 11.72
48 TestAddons/StoppedEnableDisable 11.49
49 TestCertOptions 47.12
50 TestCertExpiration 277.57
51 TestDockerFlags 44.66
52 TestForceSystemdFlag 47.5
53 TestForceSystemdEnv 47.06
59 TestErrorSpam/setup 33.48
60 TestErrorSpam/start 0.77
61 TestErrorSpam/status 1.05
62 TestErrorSpam/pause 1.59
63 TestErrorSpam/unpause 1.69
64 TestErrorSpam/stop 11.24
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 74.15
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 56.09
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.9
76 TestFunctional/serial/CacheCmd/cache/add_local 1
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 56.38
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.25
87 TestFunctional/serial/LogsFileCmd 1.23
88 TestFunctional/serial/InvalidService 5.18
90 TestFunctional/parallel/ConfigCmd 0.46
92 TestFunctional/parallel/DryRun 0.5
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.39
98 TestFunctional/parallel/ServiceCmdConnect 369.78
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 25.75
102 TestFunctional/parallel/SSHCmd 0.87
103 TestFunctional/parallel/CpCmd 1.75
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.79
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
114 TestFunctional/parallel/License 0.29
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.38
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 351.21
127 TestFunctional/parallel/ServiceCmd/List 0.53
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
130 TestFunctional/parallel/ServiceCmd/Format 0.54
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
132 TestFunctional/parallel/ServiceCmd/URL 0.51
133 TestFunctional/parallel/ProfileCmd/profile_list 0.7
134 TestFunctional/parallel/MountCmd/any-port 8.55
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
136 TestFunctional/parallel/MountCmd/specific-port 1.74
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 1.13
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
145 TestFunctional/parallel/ImageCommands/Setup 0.6
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.04
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
153 TestFunctional/parallel/DockerEnv/bash 1.09
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
157 TestFunctional/delete_echo-server_images 0.05
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 167.14
165 TestMultiControlPlane/serial/DeployApp 8.36
166 TestMultiControlPlane/serial/PingHostFromPods 1.67
167 TestMultiControlPlane/serial/AddWorkerNode 35.69
168 TestMultiControlPlane/serial/NodeLabels 0.11
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.12
170 TestMultiControlPlane/serial/CopyFile 20.53
171 TestMultiControlPlane/serial/StopSecondaryNode 12.2
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
173 TestMultiControlPlane/serial/RestartSecondaryNode 46.62
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 245.22
176 TestMultiControlPlane/serial/DeleteSecondaryNode 11.5
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.85
178 TestMultiControlPlane/serial/StopCluster 33.65
179 TestMultiControlPlane/serial/RestartCluster 116.71
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
181 TestMultiControlPlane/serial/AddSecondaryNode 68.01
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
185 TestImageBuild/serial/Setup 32.21
186 TestImageBuild/serial/NormalBuild 1.68
187 TestImageBuild/serial/BuildWithBuildArg 0.88
188 TestImageBuild/serial/BuildWithDockerIgnore 0.86
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.81
193 TestJSONOutput/start/Command 78.9
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.69
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.56
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 6.01
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 37.06
219 TestKicCustomNetwork/use_default_bridge_network 36.75
220 TestKicExistingNetwork 36.66
221 TestKicCustomSubnet 35.38
222 TestKicStaticIP 37.83
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 74.23
227 TestMountStart/serial/StartWithMountFirst 10.62
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 8.73
230 TestMountStart/serial/VerifyMountSecond 0.27
231 TestMountStart/serial/DeleteFirst 1.57
232 TestMountStart/serial/VerifyMountPostDelete 0.29
233 TestMountStart/serial/Stop 1.29
234 TestMountStart/serial/RestartStopped 8.86
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 92.65
239 TestMultiNode/serial/DeployApp2Nodes 5.77
240 TestMultiNode/serial/PingHostFrom2Pods 0.96
241 TestMultiNode/serial/AddNode 34.78
242 TestMultiNode/serial/MultiNodeLabels 0.1
243 TestMultiNode/serial/ProfileList 0.88
244 TestMultiNode/serial/CopyFile 10.31
245 TestMultiNode/serial/StopNode 2.42
246 TestMultiNode/serial/StartAfterStop 9.81
247 TestMultiNode/serial/RestartKeepsNodes 74.99
248 TestMultiNode/serial/DeleteNode 5.77
249 TestMultiNode/serial/StopMultiNode 22.18
250 TestMultiNode/serial/RestartMultiNode 52.06
251 TestMultiNode/serial/ValidateNameConflict 36.13
256 TestPreload 119.08
258 TestScheduledStopUnix 111.04
259 TestSkaffold 145.22
261 TestInsufficientStorage 14.05
262 TestRunningBinaryUpgrade 89.02
264 TestKubernetesUpgrade 224.01
265 TestMissingContainerUpgrade 86.04
277 TestStoppedBinaryUpgrade/Setup 0.85
278 TestStoppedBinaryUpgrade/Upgrade 74.74
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
281 TestPause/serial/Start 80.65
282 TestPause/serial/SecondStartNoReconfiguration 53.11
291 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
292 TestNoKubernetes/serial/StartWithK8s 40.46
293 TestPause/serial/Pause 1
294 TestPause/serial/VerifyStatus 0.56
295 TestPause/serial/Unpause 0.9
296 TestPause/serial/PauseAgain 1.23
297 TestNoKubernetes/serial/StartWithStopK8s 19.47
298 TestPause/serial/DeletePaused 2.99
299 TestPause/serial/VerifyDeletedResources 0.66
300 TestNetworkPlugins/group/auto/Start 83.92
301 TestNoKubernetes/serial/Start 12.39
302 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
303 TestNoKubernetes/serial/ProfileList 1.28
304 TestNoKubernetes/serial/Stop 1.36
305 TestNoKubernetes/serial/StartNoArgs 8.81
306 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
307 TestNetworkPlugins/group/calico/Start 69.17
308 TestNetworkPlugins/group/auto/KubeletFlags 0.45
309 TestNetworkPlugins/group/auto/NetCatPod 11.38
310 TestNetworkPlugins/group/auto/DNS 0.38
311 TestNetworkPlugins/group/auto/Localhost 0.28
312 TestNetworkPlugins/group/auto/HairPin 0.25
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/calico/KubeletFlags 0.43
315 TestNetworkPlugins/group/calico/NetCatPod 11.35
316 TestNetworkPlugins/group/custom-flannel/Start 58.47
317 TestNetworkPlugins/group/calico/DNS 0.42
318 TestNetworkPlugins/group/calico/Localhost 0.25
319 TestNetworkPlugins/group/calico/HairPin 0.17
320 TestNetworkPlugins/group/false/Start 79.38
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
323 TestNetworkPlugins/group/custom-flannel/DNS 0.24
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
326 TestNetworkPlugins/group/kindnet/Start 58.35
327 TestNetworkPlugins/group/false/KubeletFlags 0.4
328 TestNetworkPlugins/group/false/NetCatPod 11.32
329 TestNetworkPlugins/group/false/DNS 0.25
330 TestNetworkPlugins/group/false/Localhost 0.22
331 TestNetworkPlugins/group/false/HairPin 0.19
332 TestNetworkPlugins/group/flannel/Start 63.59
333 TestNetworkPlugins/group/kindnet/ControllerPod 6
334 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
335 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
336 TestNetworkPlugins/group/kindnet/DNS 0.31
337 TestNetworkPlugins/group/kindnet/Localhost 0.18
338 TestNetworkPlugins/group/kindnet/HairPin 0.18
339 TestNetworkPlugins/group/enable-default-cni/Start 50.48
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
342 TestNetworkPlugins/group/flannel/NetCatPod 11.35
343 TestNetworkPlugins/group/flannel/DNS 0.26
344 TestNetworkPlugins/group/flannel/Localhost 0.19
345 TestNetworkPlugins/group/flannel/HairPin 0.23
346 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
347 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.55
348 TestNetworkPlugins/group/bridge/Start 56.35
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
352 TestNetworkPlugins/group/kubenet/Start 80.4
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.49
354 TestNetworkPlugins/group/bridge/NetCatPod 10.36
355 TestNetworkPlugins/group/bridge/DNS 0.21
356 TestNetworkPlugins/group/bridge/Localhost 0.21
357 TestNetworkPlugins/group/bridge/HairPin 0.2
359 TestStartStop/group/old-k8s-version/serial/FirstStart 91.49
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
361 TestNetworkPlugins/group/kubenet/NetCatPod 12.36
362 TestNetworkPlugins/group/kubenet/DNS 0.27
363 TestNetworkPlugins/group/kubenet/Localhost 0.24
364 TestNetworkPlugins/group/kubenet/HairPin 0.33
366 TestStartStop/group/no-preload/serial/FirstStart 85.91
367 TestStartStop/group/old-k8s-version/serial/DeployApp 10.59
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.89
369 TestStartStop/group/old-k8s-version/serial/Stop 11.5
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/old-k8s-version/serial/SecondStart 30.79
372 TestStartStop/group/no-preload/serial/DeployApp 10.4
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11
374 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
375 TestStartStop/group/no-preload/serial/Stop 11.48
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
378 TestStartStop/group/old-k8s-version/serial/Pause 3.97
379 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
380 TestStartStop/group/no-preload/serial/SecondStart 60.2
382 TestStartStop/group/embed-certs/serial/FirstStart 80.12
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
386 TestStartStop/group/no-preload/serial/Pause 3.17
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.16
389 TestStartStop/group/embed-certs/serial/DeployApp 9.4
390 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
391 TestStartStop/group/embed-certs/serial/Stop 11.64
392 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
393 TestStartStop/group/embed-certs/serial/SecondStart 55.54
394 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
397 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
398 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.29
399 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
400 TestStartStop/group/embed-certs/serial/Pause 3.01
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
402 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 60.87
404 TestStartStop/group/newest-cni/serial/FirstStart 50
405 TestStartStop/group/newest-cni/serial/DeployApp 0
406 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
407 TestStartStop/group/newest-cni/serial/Stop 11.24
408 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
410 TestStartStop/group/newest-cni/serial/SecondStart 21.99
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.75
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
417 TestStartStop/group/newest-cni/serial/Pause 3.03
TestDownloadOnly/v1.28.0/json-events (5.43s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.428595304s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.43s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 11:59:37.349994  837622 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1018 11:59:37.350078  837622 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-809137
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-809137: exit status 85 (80.594105ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-809137 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:59:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:59:31.966062  837628 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:31.966259  837628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:31.966287  837628 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:31.966308  837628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:31.966596  837628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	W1018 11:59:31.966774  837628 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21647-835760/.minikube/config/config.json: open /home/jenkins/minikube-integration/21647-835760/.minikube/config/config.json: no such file or directory
	I1018 11:59:31.967243  837628 out.go:368] Setting JSON to true
	I1018 11:59:31.968139  837628 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13322,"bootTime":1760775450,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 11:59:31.968238  837628 start.go:141] virtualization:  
	I1018 11:59:31.972311  837628 out.go:99] [download-only-809137] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1018 11:59:31.972476  837628 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 11:59:31.972535  837628 notify.go:220] Checking for updates...
	I1018 11:59:31.975354  837628 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:59:31.978441  837628 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:59:31.981351  837628 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 11:59:31.984199  837628 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 11:59:31.986913  837628 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 11:59:31.992630  837628 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:59:31.992934  837628 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:59:32.027174  837628 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 11:59:32.027316  837628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:32.088522  837628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-18 11:59:32.078928935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:32.088633  837628 docker.go:318] overlay module found
	I1018 11:59:32.091718  837628 out.go:99] Using the docker driver based on user configuration
	I1018 11:59:32.091770  837628 start.go:305] selected driver: docker
	I1018 11:59:32.091777  837628 start.go:925] validating driver "docker" against <nil>
	I1018 11:59:32.091890  837628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:32.146269  837628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-18 11:59:32.137378902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:32.146440  837628 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:59:32.146726  837628 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 11:59:32.146881  837628 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:59:32.149921  837628 out.go:171] Using Docker driver with root privileges
	I1018 11:59:32.152930  837628 cni.go:84] Creating CNI manager for ""
	I1018 11:59:32.153010  837628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1018 11:59:32.153026  837628 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 11:59:32.153105  837628 start.go:349] cluster config:
	{Name:download-only-809137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-809137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:59:32.156282  837628 out.go:99] Starting "download-only-809137" primary control-plane node in "download-only-809137" cluster
	I1018 11:59:32.156320  837628 cache.go:123] Beginning downloading kic base image for docker with docker
	I1018 11:59:32.159080  837628 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:59:32.159139  837628 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1018 11:59:32.159209  837628 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:59:32.174728  837628 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:59:32.174923  837628 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:59:32.175023  837628 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:59:32.216106  837628 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1018 11:59:32.216161  837628 cache.go:58] Caching tarball of preloaded images
	I1018 11:59:32.216334  837628 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1018 11:59:32.219569  837628 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 11:59:32.219638  837628 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1018 11:59:32.302920  837628 preload.go:290] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1018 11:59:32.303069  837628 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1018 11:59:35.423971  837628 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1018 11:59:35.424460  837628 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/download-only-809137/config.json ...
	I1018 11:59:35.424524  837628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/download-only-809137/config.json: {Name:mka41a936e1c04ab7d61a6d3fa9d5f51d4ea0082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:59:35.425417  837628 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1018 11:59:35.426246  837628 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21647-835760/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-809137 host does not exist
	  To start a cluster, run: "minikube start -p download-only-809137"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
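`exit status 85` is expected here: the download-only profile was created without ever starting a host (hence "The control-plane node download-only-809137 host does not exist" in the output), so `minikube logs` fails and the test asserts the non-zero exit rather than treating it as an error. A sketch of the general pattern for asserting a specific exit code in shell, with the minikube invocation replaced by a subshell stand-in (the `( exit 85 )` is simulated, not a real run):

```shell
# Hedged illustration of "expect this command to fail with a known code".
set +e
( exit 85 )            # stand-in for: out/minikube-linux-arm64 logs -p <profile>
status=$?
set -e
if [ "$status" -eq 85 ]; then
  echo "got expected exit status 85"
else
  echo "unexpected exit status: $status" >&2
  exit 1
fi
```

Disabling `set -e` around the expected-to-fail command matters: otherwise the script would abort before the exit code could be inspected.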
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-809137
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
TestDownloadOnly/v1.34.1/json-events (3.99s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-223621 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-223621 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.993591001s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.99s)
TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 11:59:41.783207  837622 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1018 11:59:41.783251  837622 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-835760/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
TestDownloadOnly/v1.34.1/LogsDuration (0.17s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-223621
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-223621: exit status 85 (167.976569ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-809137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-809137 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ delete  │ -p download-only-809137                                                                                                                                                       │ download-only-809137 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │ 18 Oct 25 11:59 UTC │
	│ start   │ -o=json --download-only -p download-only-223621 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-223621 │ jenkins │ v1.37.0 │ 18 Oct 25 11:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:59:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:59:37.833016  837827 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:59:37.833154  837827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:37.833217  837827 out.go:374] Setting ErrFile to fd 2...
	I1018 11:59:37.833239  837827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:59:37.833802  837827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 11:59:37.834278  837827 out.go:368] Setting JSON to true
	I1018 11:59:37.835160  837827 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13328,"bootTime":1760775450,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 11:59:37.835265  837827 start.go:141] virtualization:  
	I1018 11:59:37.838594  837827 out.go:99] [download-only-223621] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 11:59:37.838831  837827 notify.go:220] Checking for updates...
	I1018 11:59:37.841914  837827 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:59:37.846784  837827 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:59:37.851042  837827 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 11:59:37.854411  837827 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 11:59:37.856884  837827 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 11:59:37.861789  837827 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:59:37.862056  837827 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:59:37.883680  837827 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 11:59:37.883786  837827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:37.943804  837827 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 11:59:37.934244875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:37.943910  837827 docker.go:318] overlay module found
	I1018 11:59:37.946482  837827 out.go:99] Using the docker driver based on user configuration
	I1018 11:59:37.946536  837827 start.go:305] selected driver: docker
	I1018 11:59:37.946553  837827 start.go:925] validating driver "docker" against <nil>
	I1018 11:59:37.946666  837827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:59:37.998099  837827 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 11:59:37.988748224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 11:59:37.998319  837827 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:59:37.998718  837827 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 11:59:37.998950  837827 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:59:38.001735  837827 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-223621 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223621"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.17s)
TestDownloadOnly/v1.34.1/DeleteAll (0.38s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.38s)
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-223621
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)
TestBinaryMirror (0.62s)
=== RUN   TestBinaryMirror
I1018 11:59:43.656492  837622 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-415208 --alsologtostderr --binary-mirror http://127.0.0.1:32775 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-415208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-415208
--- PASS: TestBinaryMirror (0.62s)
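Both the kubectl download earlier in the log and the `--binary-mirror` run here rely on the `checksum=file:<url>.sha256` convention: the downloaded binary is validated against a published `.sha256` sidecar file. A local sketch of that producer/consumer handshake, with temp files standing in for the real binary and mirror (nothing here touches dl.k8s.io):

```shell
# Hedged sketch of sidecar-checksum validation; file names are stand-ins.
bin="$(mktemp)"
printf 'fake-kubectl-bytes' > "$bin"                 # stand-in for the binary
sha256sum "$bin" | cut -d' ' -f1 > "$bin.sha256"     # publisher writes the sidecar
want="$(cat "$bin.sha256")"                          # downloader fetches the sidecar
got="$(sha256sum "$bin" | cut -d' ' -f1)"            # downloader hashes the artifact
if [ "$got" = "$want" ]; then
  echo "sha256 sidecar matches"
else
  echo "sha256 mismatch" >&2
  exit 1
fi
rm -f "$bin" "$bin.sha256"
```

The sidecar scheme lets a mirror serve both files from the same path layout, which is why the test can point `--binary-mirror` at a local HTTP endpoint.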
TestOffline (62.63s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-639725 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-639725 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m0.292069345s)
helpers_test.go:175: Cleaning up "offline-docker-639725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-639725
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-639725: (2.340660652s)
--- PASS: TestOffline (62.63s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-481299
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-481299: exit status 85 (73.067018ms)
-- stdout --
	* Profile "addons-481299" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-481299"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-481299
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-481299: exit status 85 (77.90666ms)

                                                
                                                
-- stdout --
	* Profile "addons-481299" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-481299"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (188.51s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-481299 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-481299 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m8.507206324s)
--- PASS: TestAddons/Setup (188.51s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-481299 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-481299 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-481299 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-481299 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [35740029-fd04-4eba-94be-aa1220877861] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [35740029-fd04-4eba-94be-aa1220877861] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003386317s
addons_test.go:694: (dbg) Run:  kubectl --context addons-481299 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-481299 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-481299 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-481299 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

                                                
                                    
TestAddons/parallel/Registry (15.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.456962ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zqqmr" [c46c8acd-add1-4095-94df-928039577436] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00367427s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mx9vb" [eadf782d-aa53-44c9-b43e-0d18be63465e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004118669s
addons_test.go:392: (dbg) Run:  kubectl --context addons-481299 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-481299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-481299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.473941657s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 ip
2025/10/18 12:06:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable registry --alsologtostderr -v=1: (1.105373226s)
--- PASS: TestAddons/parallel/Registry (15.83s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.74s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.926615ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-481299
addons_test.go:332: (dbg) Run:  kubectl --context addons-481299 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fg7d4" [66f867ab-c593-42f7-9985-7e1515596d17] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004371652s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.22s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.178152ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7dj5n" [fcd564d8-f2d0-489e-ab65-63cf29289457] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003744737s
addons_test.go:463: (dbg) Run:  kubectl --context addons-481299 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

                                                
                                    
TestAddons/parallel/Headlamp (17.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-481299 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-481299 --alsologtostderr -v=1: (1.1785164s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-cbpft" [a08dc186-c164-455b-b70c-643689b2406d] Pending
helpers_test.go:352: "headlamp-6945c6f4d-cbpft" [a08dc186-c164-455b-b70c-643689b2406d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-cbpft" [a08dc186-c164-455b-b70c-643689b2406d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004277609s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable headlamp --alsologtostderr -v=1: (5.977718279s)
--- PASS: TestAddons/parallel/Headlamp (17.16s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-dzsz8" [ffd4842c-09b4-4973-9866-1ce0d5dbd809] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003546887s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j66tb" [d5658d0f-31b9-48e5-8616-fba6dda18580] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003777684s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

                                                
                                    
TestAddons/parallel/Yakd (11.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-nxvml" [02dc387f-2368-411b-b5f3-65072f6fdae6] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003228999s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-481299 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-481299 addons disable yakd --alsologtostderr -v=1: (5.716527932s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.49s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-481299
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-481299: (11.219162386s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-481299
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-481299
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-481299
--- PASS: TestAddons/StoppedEnableDisable (11.49s)

                                                
                                    
TestCertOptions (47.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-733575 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-733575 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (43.741154144s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-733575 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-733575 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-733575 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-733575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-733575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-733575: (2.344311321s)
--- PASS: TestCertOptions (47.12s)

                                                
                                    
TestCertExpiration (277.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-567539 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-567539 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (47.395388599s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-567539 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-567539 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (47.761741548s)
helpers_test.go:175: Cleaning up "cert-expiration-567539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-567539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-567539: (2.410628948s)
--- PASS: TestCertExpiration (277.57s)

                                                
                                    
TestDockerFlags (44.66s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-602618 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1018 13:11:02.494638  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-602618 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.17241621s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-602618 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-602618 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-602618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-602618
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-602618: (2.648494625s)
--- PASS: TestDockerFlags (44.66s)

                                                
                                    
TestForceSystemdFlag (47.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-050332 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-050332 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.266297283s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-050332 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-050332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-050332
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-050332: (2.704840661s)
--- PASS: TestForceSystemdFlag (47.50s)

                                                
                                    
TestForceSystemdEnv (47.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-882002 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-882002 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.059394317s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-882002 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-882002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-882002
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-882002: (2.384615139s)
--- PASS: TestForceSystemdEnv (47.06s)

                                                
                                    
TestErrorSpam/setup (33.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-864894 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-864894 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-864894 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-864894 --driver=docker  --container-runtime=docker: (33.477415958s)
--- PASS: TestErrorSpam/setup (33.48s)

                                                
                                    
TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (11.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 stop: (11.027679141s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-864894 --log_dir /tmp/nospam-864894 stop
--- PASS: TestErrorSpam/stop (11.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21647-835760/.minikube/files/etc/test/nested/copy/837622/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918162 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1018 12:22:52.908229  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:52.914620  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:52.925987  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:52.947328  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:52.988679  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:53.070098  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:53.231708  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:53.553380  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:54.195358  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:55.476698  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:22:58.038021  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:03.159672  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:13.401576  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:33.882898  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-918162 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m14.152325464s)
--- PASS: TestFunctional/serial/StartWithProxy (74.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (56.09s)

=== RUN   TestFunctional/serial/SoftStart
I1018 12:23:54.974919  837622 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918162 --alsologtostderr -v=8
E1018 12:24:14.845419  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-918162 --alsologtostderr -v=8: (56.088462823s)
functional_test.go:678: soft start took 56.090880168s for "functional-918162" cluster.
I1018 12:24:51.063724  837622 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (56.09s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-918162 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.90s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-918162 /tmp/TestFunctionalserialCacheCmdcacheadd_local1025522988/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cache add minikube-local-cache-test:functional-918162
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cache delete minikube-local-cache-test:functional-918162
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-918162
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.395099ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 kubectl -- --context functional-918162 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-918162 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (56.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918162 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 12:25:36.767022  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-918162 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.376821225s)
functional_test.go:776: restart took 56.376926367s for "functional-918162" cluster.
I1018 12:25:53.868589  837622 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (56.38s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-918162 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-918162 logs: (1.247525711s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 logs --file /tmp/TestFunctionalserialLogsFileCmd343534291/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-918162 logs --file /tmp/TestFunctionalserialLogsFileCmd343534291/001/logs.txt: (1.226143994s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (5.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-918162 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-918162
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-918162: exit status 115 (1.042815142s)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30577 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-918162 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.18s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 config get cpus: exit status 14 (78.00735ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 config get cpus: exit status 14 (79.064382ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DryRun (0.5s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918162 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-918162 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (205.984906ms)

-- stdout --
	* [functional-918162] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1018 12:32:24.675566  893372 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:32:24.675745  893372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:32:24.675776  893372 out.go:374] Setting ErrFile to fd 2...
	I1018 12:32:24.675799  893372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:32:24.676052  893372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 12:32:24.676447  893372 out.go:368] Setting JSON to false
	I1018 12:32:24.677855  893372 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15295,"bootTime":1760775450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 12:32:24.677958  893372 start.go:141] virtualization:  
	I1018 12:32:24.681273  893372 out.go:179] * [functional-918162] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:32:24.684322  893372 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:32:24.684389  893372 notify.go:220] Checking for updates...
	I1018 12:32:24.695323  893372 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:32:24.698422  893372 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:32:24.701282  893372 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 12:32:24.705083  893372 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:32:24.708710  893372 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:32:24.712766  893372 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:32:24.713361  893372 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:32:24.735959  893372 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:32:24.736083  893372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:32:24.813308  893372 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:32:24.803177996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:32:24.813426  893372 docker.go:318] overlay module found
	I1018 12:32:24.817076  893372 out.go:179] * Using the docker driver based on existing profile
	I1018 12:32:24.820083  893372 start.go:305] selected driver: docker
	I1018 12:32:24.820108  893372 start.go:925] validating driver "docker" against &{Name:functional-918162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-918162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:32:24.820246  893372 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:32:24.823824  893372 out.go:203] 
	W1018 12:32:24.826854  893372 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 12:32:24.829844  893372 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918162 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.50s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918162 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-918162 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (213.569861ms)

-- stdout --
	* [functional-918162] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1018 12:32:24.475497  893325 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:32:24.475700  893325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:32:24.475726  893325 out.go:374] Setting ErrFile to fd 2...
	I1018 12:32:24.475744  893325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:32:24.477378  893325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 12:32:24.477832  893325 out.go:368] Setting JSON to false
	I1018 12:32:24.478948  893325 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15295,"bootTime":1760775450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 12:32:24.479032  893325 start.go:141] virtualization:  
	I1018 12:32:24.484452  893325 out.go:179] * [functional-918162] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1018 12:32:24.487742  893325 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:32:24.487831  893325 notify.go:220] Checking for updates...
	I1018 12:32:24.493847  893325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:32:24.496730  893325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	I1018 12:32:24.499610  893325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	I1018 12:32:24.502473  893325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:32:24.505308  893325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:32:24.508795  893325 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:32:24.509528  893325 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:32:24.537305  893325 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:32:24.537444  893325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:32:24.608087  893325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:32:24.598611349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:32:24.608192  893325 docker.go:318] overlay module found
	I1018 12:32:24.611500  893325 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 12:32:24.614310  893325 start.go:305] selected driver: docker
	I1018 12:32:24.614330  893325 start.go:925] validating driver "docker" against &{Name:functional-918162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-918162 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:32:24.614442  893325 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:32:24.617912  893325 out.go:203] 
	W1018 12:32:24.620748  893325 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 12:32:24.623555  893325 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.39s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.39s)

TestFunctional/parallel/ServiceCmdConnect (369.78s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-918162 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-918162 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-rgr5j" [5e2d9ee5-4cd5-407d-9d68-6ec98d55f813] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-rgr5j" [5e2d9ee5-4cd5-407d-9d68-6ec98d55f813] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6m9.003822568s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31390
functional_test.go:1680: http://192.168.49.2:31390: success! body:
Request served by hello-node-connect-7d85dfc575-rgr5j

HTTP/1.1 GET /

Host: 192.168.49.2:31390
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (369.78s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b67518e2-93a7-470b-849d-0e87e7e23194] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004011585s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-918162 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-918162 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-918162 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-918162 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e5277f05-9ae8-4aa1-9cff-6fe1407209f5] Pending
helpers_test.go:352: "sp-pod" [e5277f05-9ae8-4aa1-9cff-6fe1407209f5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e5277f05-9ae8-4aa1-9cff-6fe1407209f5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007607043s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-918162 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-918162 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-918162 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [175e3aca-b569-410d-a8e4-f00923a5080a] Pending
helpers_test.go:352: "sp-pod" [175e3aca-b569-410d-a8e4-f00923a5080a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003612296s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-918162 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.75s)
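The pass above exercises the storage-provisioner round trip: a file written through the first sp-pod must survive pod deletion and still be visible to the replacement pod backed by the same claim. A minimal local sketch of that check; the temp directory and paths here are stand-ins, not the test's real volume:

```shell
#!/bin/sh
# Sketch of the PVC persistence check (all paths hypothetical): a marker
# written through the first "pod" must be readable after the pod is replaced.
PV=$(mktemp -d)          # stands in for the provisioned volume
touch "$PV/foo"          # ~ kubectl exec sp-pod -- touch /tmp/mount/foo
# ... first pod deleted, second pod created against the same claim ...
if ls "$PV" | grep -q '^foo$'; then   # ~ kubectl exec sp-pod -- ls /tmp/mount
  echo "data persisted"
fi
```

The same pattern is what distinguishes a claim-backed volume from an emptyDir, which would lose the marker on pod replacement.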

TestFunctional/parallel/SSHCmd (0.87s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.87s)

TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh -n functional-918162 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cp functional-918162:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1393531691/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh -n functional-918162 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh -n functional-918162 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/837622/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /etc/test/nested/copy/837622/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.79s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/837622.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /etc/ssl/certs/837622.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/837622.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /usr/share/ca-certificates/837622.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8376222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /etc/ssl/certs/8376222.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8376222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /usr/share/ca-certificates/8376222.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)
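CertSync verifies each synced certificate under two names: the copied `/etc/ssl/certs/<pid>.pem` file and an OpenSSL subject-hash alias such as `51391683.0`. A minimal sketch of that hash-alias layout, assuming dummy content (real hash names come from `openssl x509 -noout -subject_hash -in cert.pem`):

```shell
#!/bin/sh
# Sketch of the layout CertSync checks: the same certificate must be
# reachable both by its copied name and by a subject-hash alias.
# The filenames mirror the log; the certificate body is a dummy stand-in.
CERTS=$(mktemp -d)
echo "dummy certificate body" > "$CERTS/837622.pem"
ln -s "$CERTS/837622.pem" "$CERTS/51391683.0"   # hash-named lookup entry
cat "$CERTS/51391683.0"                          # same body via the alias
```

The `.0` suffix is the convention used by OpenSSL's hashed CA directory lookup, which is why the test reads both paths.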

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-918162 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 ssh "sudo systemctl is-active crio": exit status 1 (278.217136ms)

-- stdout --
	inactive
-- /stdout --

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918162 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918162 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-918162 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 889526: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-918162 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918162 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-918162 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4e68ecfa-41ea-46d8-a867-97d4a43190cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4e68ecfa-41ea-46d8-a867-97d4a43190cd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004745414s
I1018 12:26:11.110605  837622 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-918162 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.148.202 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-918162 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (351.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-918162 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-918162 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-q8vg9" [078fb3d1-6a08-460b-a95b-c7bc7c89b696] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 12:27:52.905409  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:20.610710  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-75c85bcc94-q8vg9" [078fb3d1-6a08-460b-a95b-c7bc7c89b696] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 5m51.003516458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (351.21s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 service list -o json
functional_test.go:1504: Took "551.31631ms" to run "out/minikube-linux-arm64 -p functional-918162 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31544
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31544
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.7s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "628.453205ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "75.769425ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.70s)

TestFunctional/parallel/MountCmd/any-port (8.55s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdany-port618796827/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760790741999792739" to /tmp/TestFunctionalparallelMountCmdany-port618796827/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760790741999792739" to /tmp/TestFunctionalparallelMountCmdany-port618796827/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760790741999792739" to /tmp/TestFunctionalparallelMountCmdany-port618796827/001/test-1760790741999792739
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (469.950075ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1018 12:32:22.472457  837622 retry.go:31] will retry after 511.323455ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 12:32 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 12:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 12:32 test-1760790741999792739
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh cat /mount-9p/test-1760790741999792739
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-918162 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [494ea498-c167-426a-aab9-6dec1bd6f8ed] Pending
helpers_test.go:352: "busybox-mount" [494ea498-c167-426a-aab9-6dec1bd6f8ed] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [494ea498-c167-426a-aab9-6dec1bd6f8ed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [494ea498-c167-426a-aab9-6dec1bd6f8ed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003085979s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-918162 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdany-port618796827/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
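The `--- PASS: <name> (<duration>s)` summary lines throughout this report follow the standard `go test` output format. A small helper like the following (a sketch for post-processing a saved log, not part of the minikube test suite) can pull out per-test statuses and durations:

```python
import re

# Matches go test summary lines such as:
#   --- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
SUMMARY_RE = re.compile(r"--- (PASS|FAIL|SKIP): (\S+) \(([\d.]+)s\)")

def parse_results(log_text):
    """Return (status, test name, duration in seconds) for each summary line."""
    return [(m.group(1), m.group(2), float(m.group(3)))
            for m in SUMMARY_RE.finditer(log_text)]

sample = """
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
"""
results = parse_results(sample)
```

Sorting `results` by the duration field is a quick way to find the slowest tests in a run like this one.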

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "368.106307ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "68.462472ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdspecific-port1817395392/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (412.628381ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 12:32:30.957022  837622 retry.go:31] will retry after 283.122247ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdspecific-port1817395392/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 ssh "sudo umount -f /mount-9p": exit status 1 (273.568654ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-918162 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdspecific-port1817395392/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdVerifyCleanup298161311/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdVerifyCleanup298161311/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdVerifyCleanup298161311/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-918162 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdVerifyCleanup298161311/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdVerifyCleanup298161311/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918162 /tmp/TestFunctionalparallelMountCmdVerifyCleanup298161311/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.13s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-918162 version -o=json --components: (1.131997678s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918162 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-918162
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-918162
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918162 image ls --format short --alsologtostderr:
I1018 12:32:45.544922  896899 out.go:360] Setting OutFile to fd 1 ...
I1018 12:32:45.545036  896899 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:45.545045  896899 out.go:374] Setting ErrFile to fd 2...
I1018 12:32:45.545051  896899 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:45.545380  896899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
I1018 12:32:45.546002  896899 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:45.546124  896899 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:45.546599  896899 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:45.564573  896899 ssh_runner.go:195] Run: systemctl --version
I1018 12:32:45.564627  896899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:45.583693  896899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:45.687775  896899 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918162 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-918162 │ b0be61d3b28db │ 30B    │
│ docker.io/library/nginx                     │ latest            │ e35ad067421cc │ 180MB  │
│ docker.io/library/nginx                     │ alpine            │ 9c92f55c0336c │ 53.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ 43911e833d64d │ 83.7MB │
│ docker.io/kicbase/echo-server               │ functional-918162 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ localhost/my-image                          │ functional-918162 │ 1a77b4c2565da │ 1.41MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ 7eb2c6ff0c5a7 │ 71.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ b5f57ec6b9867 │ 50.5MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ 05baa95f5142d │ 74.7MB │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918162 image ls --format table --alsologtostderr:
I1018 12:32:49.883180  897253 out.go:360] Setting OutFile to fd 1 ...
I1018 12:32:49.883379  897253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:49.883392  897253 out.go:374] Setting ErrFile to fd 2...
I1018 12:32:49.883397  897253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:49.883667  897253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
I1018 12:32:49.884299  897253 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:49.884461  897253 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:49.884995  897253 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:49.902378  897253 ssh_runner.go:195] Run: systemctl --version
I1018 12:32:49.902440  897253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:49.919721  897253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:50.026591  897253 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918162 image ls --format json --alsologtostderr:
[{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"53300000"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"83700000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-918162","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b0be61d3b28db5b77c0ca48cff05336041ee082c83902ae0f2ddd4cb005ff6a0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-918162"],"size":"30"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"71500000"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"74700000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"1a77b4c2565dad4ec6d666882fb213b1cc27cbb26229b8be21b111cffff48ac9","repoDigests":[],"repoTags":["localhost/my-image:functional-918162"],"size":"1410000"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"180000000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"50500000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918162 image ls --format json --alsologtostderr:
I1018 12:32:49.663154  897216 out.go:360] Setting OutFile to fd 1 ...
I1018 12:32:49.663300  897216 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:49.663324  897216 out.go:374] Setting ErrFile to fd 2...
I1018 12:32:49.663345  897216 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:49.663616  897216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
I1018 12:32:49.664272  897216 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:49.664434  897216 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:49.664968  897216 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:49.682178  897216 ssh_runner.go:195] Run: systemctl --version
I1018 12:32:49.682248  897216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:49.702388  897216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:49.803989  897216 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
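As the dump above shows, `image ls --format json` emits an array of objects with `id`, `repoDigests`, `repoTags`, and a string-valued `size` field (bytes). A quick sketch for summarizing such output (field names taken from this report; sample data abbreviated):

```python
import json

# Two entries copied (abbreviated) from the `image ls --format json` output above.
raw = '''[
  {"id": "9c92f55c0336c", "repoDigests": [], "repoTags": ["docker.io/library/nginx:alpine"], "size": "53300000"},
  {"id": "a1894772a478e", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.6.4-0"], "size": "205000000"}
]'''

images = json.loads(raw)
# size comes back as a string of bytes, so convert before doing arithmetic
total_bytes = sum(int(img["size"]) for img in images)
by_size = sorted(images, key=lambda img: int(img["size"]), reverse=True)
```

In a real run the input would be piped from `minikube -p <profile> image ls --format json` rather than inlined.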

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918162 image ls --format yaml --alsologtostderr:
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "180000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "71500000"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "74700000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "50500000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-918162
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: b0be61d3b28db5b77c0ca48cff05336041ee082c83902ae0f2ddd4cb005ff6a0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-918162
size: "30"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "83700000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "53300000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918162 image ls --format yaml --alsologtostderr:
I1018 12:32:45.779443  896935 out.go:360] Setting OutFile to fd 1 ...
I1018 12:32:45.779610  896935 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:45.779619  896935 out.go:374] Setting ErrFile to fd 2...
I1018 12:32:45.779625  896935 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:45.779893  896935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
I1018 12:32:45.780490  896935 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:45.780608  896935 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:45.781052  896935 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:45.798534  896935 ssh_runner.go:195] Run: systemctl --version
I1018 12:32:45.798592  896935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:45.815379  896935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:45.920077  896935 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918162 ssh pgrep buildkitd: exit status 1 (281.377358ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image build -t localhost/my-image:functional-918162 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-918162 image build -t localhost/my-image:functional-918162 testdata/build --alsologtostderr: (3.142000067s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918162 image build -t localhost/my-image:functional-918162 testdata/build --alsologtostderr:
I1018 12:32:46.293885  897034 out.go:360] Setting OutFile to fd 1 ...
I1018 12:32:46.294748  897034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:46.294789  897034 out.go:374] Setting ErrFile to fd 2...
I1018 12:32:46.294812  897034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:32:46.295097  897034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
I1018 12:32:46.295789  897034 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:46.297868  897034 config.go:182] Loaded profile config "functional-918162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:32:46.298353  897034 cli_runner.go:164] Run: docker container inspect functional-918162 --format={{.State.Status}}
I1018 12:32:46.317277  897034 ssh_runner.go:195] Run: systemctl --version
I1018 12:32:46.317335  897034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918162
I1018 12:32:46.334379  897034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/functional-918162/id_rsa Username:docker}
I1018 12:32:46.435945  897034 build_images.go:161] Building image from path: /tmp/build.3201533702.tar
I1018 12:32:46.436016  897034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 12:32:46.443938  897034 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3201533702.tar
I1018 12:32:46.447429  897034 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3201533702.tar: stat -c "%s %y" /var/lib/minikube/build/build.3201533702.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3201533702.tar': No such file or directory
I1018 12:32:46.447462  897034 ssh_runner.go:362] scp /tmp/build.3201533702.tar --> /var/lib/minikube/build/build.3201533702.tar (3072 bytes)
I1018 12:32:46.465382  897034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3201533702
I1018 12:32:46.473455  897034 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3201533702 -xf /var/lib/minikube/build/build.3201533702.tar
I1018 12:32:46.481428  897034 docker.go:361] Building image: /var/lib/minikube/build/build.3201533702
I1018 12:32:46.481520  897034 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-918162 /var/lib/minikube/build/build.3201533702
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:1a77b4c2565dad4ec6d666882fb213b1cc27cbb26229b8be21b111cffff48ac9 done
#8 naming to localhost/my-image:functional-918162 done
#8 DONE 0.1s
I1018 12:32:49.352904  897034 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-918162 /var/lib/minikube/build/build.3201533702: (2.871354249s)
I1018 12:32:49.352995  897034 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3201533702
I1018 12:32:49.361289  897034 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3201533702.tar
I1018 12:32:49.369299  897034 build_images.go:217] Built localhost/my-image:functional-918162 from /tmp/build.3201533702.tar
I1018 12:32:49.369330  897034 build_images.go:133] succeeded building to: functional-918162
I1018 12:32:49.369336  897034 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

TestFunctional/parallel/ImageCommands/Setup (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-918162
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image load --daemon kicbase/echo-server:functional-918162 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image load --daemon kicbase/echo-server:functional-918162 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-918162
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image load --daemon kicbase/echo-server:functional-918162 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image save kicbase/echo-server:functional-918162 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image rm kicbase/echo-server:functional-918162 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-918162
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 image save --daemon kicbase/echo-server:functional-918162 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-918162
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

TestFunctional/parallel/DockerEnv/bash (1.09s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-918162 docker-env) && out/minikube-linux-arm64 status -p functional-918162"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-918162 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.09s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-918162 update-context --alsologtostderr -v=2
E1018 12:32:52.904844  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-918162
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-918162
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-918162
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (167.14s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1018 12:37:52.904480  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:39:15.972123  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m46.233814402s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (167.14s)

TestMultiControlPlane/serial/DeployApp (8.36s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 kubectl -- rollout status deployment/busybox: (5.054858103s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-dw4ql -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-kpxtp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-swqct -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-dw4ql -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-kpxtp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-swqct -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-dw4ql -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-kpxtp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-swqct -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.36s)

TestMultiControlPlane/serial/PingHostFromPods (1.67s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-dw4ql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-dw4ql -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-kpxtp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-kpxtp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-swqct -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 kubectl -- exec busybox-7b57f96db7-swqct -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)

TestMultiControlPlane/serial/AddWorkerNode (35.69s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 node add --alsologtostderr -v 5: (34.649304662s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
E1018 12:41:02.494795  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:02.501124  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:02.512474  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:02.533834  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:02.575277  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:02.656836  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:02.819092  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5: (1.045476715s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.69s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-877597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1018 12:41:03.141016  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:41:03.782559  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.123646627s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

TestMultiControlPlane/serial/CopyFile (20.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --output json --alsologtostderr -v 5
E1018 12:41:05.064920  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 status --output json --alsologtostderr -v 5: (1.160775486s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp testdata/cp-test.txt ha-877597:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3075524325/001/cp-test_ha-877597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597:/home/docker/cp-test.txt ha-877597-m02:/home/docker/cp-test_ha-877597_ha-877597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test_ha-877597_ha-877597-m02.txt"
E1018 12:41:07.627012  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597:/home/docker/cp-test.txt ha-877597-m03:/home/docker/cp-test_ha-877597_ha-877597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test_ha-877597_ha-877597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597:/home/docker/cp-test.txt ha-877597-m04:/home/docker/cp-test_ha-877597_ha-877597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test_ha-877597_ha-877597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp testdata/cp-test.txt ha-877597-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3075524325/001/cp-test_ha-877597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m02:/home/docker/cp-test.txt ha-877597:/home/docker/cp-test_ha-877597-m02_ha-877597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test_ha-877597-m02_ha-877597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m02:/home/docker/cp-test.txt ha-877597-m03:/home/docker/cp-test_ha-877597-m02_ha-877597-m03.txt
E1018 12:41:12.748848  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test_ha-877597-m02_ha-877597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m02:/home/docker/cp-test.txt ha-877597-m04:/home/docker/cp-test_ha-877597-m02_ha-877597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test_ha-877597-m02_ha-877597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp testdata/cp-test.txt ha-877597-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3075524325/001/cp-test_ha-877597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m03:/home/docker/cp-test.txt ha-877597:/home/docker/cp-test_ha-877597-m03_ha-877597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test_ha-877597-m03_ha-877597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m03:/home/docker/cp-test.txt ha-877597-m02:/home/docker/cp-test_ha-877597-m03_ha-877597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test_ha-877597-m03_ha-877597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m03:/home/docker/cp-test.txt ha-877597-m04:/home/docker/cp-test_ha-877597-m03_ha-877597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test_ha-877597-m03_ha-877597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp testdata/cp-test.txt ha-877597-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3075524325/001/cp-test_ha-877597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m04:/home/docker/cp-test.txt ha-877597:/home/docker/cp-test_ha-877597-m04_ha-877597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597 "sudo cat /home/docker/cp-test_ha-877597-m04_ha-877597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m04:/home/docker/cp-test.txt ha-877597-m02:/home/docker/cp-test_ha-877597-m04_ha-877597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test.txt"
E1018 12:41:22.991208  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m02 "sudo cat /home/docker/cp-test_ha-877597-m04_ha-877597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 cp ha-877597-m04:/home/docker/cp-test.txt ha-877597-m03:/home/docker/cp-test_ha-877597-m04_ha-877597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 ssh -n ha-877597-m03 "sudo cat /home/docker/cp-test_ha-877597-m04_ha-877597-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.53s)

TestMultiControlPlane/serial/StopSecondaryNode (12.2s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 node stop m02 --alsologtostderr -v 5: (11.378266705s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5: exit status 7 (821.30822ms)
-- stdout --
	ha-877597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-877597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-877597-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-877597-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1018 12:41:36.041819  920275 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:41:36.042027  920275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:41:36.042056  920275 out.go:374] Setting ErrFile to fd 2...
	I1018 12:41:36.042074  920275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:41:36.042360  920275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 12:41:36.042587  920275 out.go:368] Setting JSON to false
	I1018 12:41:36.042652  920275 mustload.go:65] Loading cluster: ha-877597
	I1018 12:41:36.042733  920275 notify.go:220] Checking for updates...
	I1018 12:41:36.043131  920275 config.go:182] Loaded profile config "ha-877597": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:41:36.043165  920275 status.go:174] checking status of ha-877597 ...
	I1018 12:41:36.043743  920275 cli_runner.go:164] Run: docker container inspect ha-877597 --format={{.State.Status}}
	I1018 12:41:36.065316  920275 status.go:371] ha-877597 host status = "Running" (err=<nil>)
	I1018 12:41:36.065337  920275 host.go:66] Checking if "ha-877597" exists ...
	I1018 12:41:36.065645  920275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-877597
	I1018 12:41:36.093455  920275 host.go:66] Checking if "ha-877597" exists ...
	I1018 12:41:36.093954  920275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:41:36.094002  920275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-877597
	I1018 12:41:36.116849  920275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/ha-877597/id_rsa Username:docker}
	I1018 12:41:36.223046  920275 ssh_runner.go:195] Run: systemctl --version
	I1018 12:41:36.230874  920275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:41:36.249697  920275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:41:36.326578  920275 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-18 12:41:36.315096935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:41:36.327847  920275 kubeconfig.go:125] found "ha-877597" server: "https://192.168.49.254:8443"
	I1018 12:41:36.327908  920275 api_server.go:166] Checking apiserver status ...
	I1018 12:41:36.328000  920275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:41:36.345139  920275 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2241/cgroup
	I1018 12:41:36.354979  920275 api_server.go:182] apiserver freezer: "5:freezer:/docker/d3d32873e40599cfdc8015744d357c832351b0ff26de1f08f543fb6f504aecf4/kubepods/burstable/poded1aaa46324e67037dcb1a51ababa618/f87f332bc5d1c88263354c7396fda00181cd7037c0c225ea578fc31da4eee427"
	I1018 12:41:36.355054  920275 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d3d32873e40599cfdc8015744d357c832351b0ff26de1f08f543fb6f504aecf4/kubepods/burstable/poded1aaa46324e67037dcb1a51ababa618/f87f332bc5d1c88263354c7396fda00181cd7037c0c225ea578fc31da4eee427/freezer.state
	I1018 12:41:36.363551  920275 api_server.go:204] freezer state: "THAWED"
	I1018 12:41:36.363579  920275 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 12:41:36.371819  920275 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 12:41:36.371850  920275 status.go:463] ha-877597 apiserver status = Running (err=<nil>)
	I1018 12:41:36.371863  920275 status.go:176] ha-877597 status: &{Name:ha-877597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:41:36.371907  920275 status.go:174] checking status of ha-877597-m02 ...
	I1018 12:41:36.372231  920275 cli_runner.go:164] Run: docker container inspect ha-877597-m02 --format={{.State.Status}}
	I1018 12:41:36.389559  920275 status.go:371] ha-877597-m02 host status = "Stopped" (err=<nil>)
	I1018 12:41:36.389581  920275 status.go:384] host is not running, skipping remaining checks
	I1018 12:41:36.389587  920275 status.go:176] ha-877597-m02 status: &{Name:ha-877597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:41:36.389607  920275 status.go:174] checking status of ha-877597-m03 ...
	I1018 12:41:36.389942  920275 cli_runner.go:164] Run: docker container inspect ha-877597-m03 --format={{.State.Status}}
	I1018 12:41:36.409480  920275 status.go:371] ha-877597-m03 host status = "Running" (err=<nil>)
	I1018 12:41:36.409504  920275 host.go:66] Checking if "ha-877597-m03" exists ...
	I1018 12:41:36.409803  920275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-877597-m03
	I1018 12:41:36.430113  920275 host.go:66] Checking if "ha-877597-m03" exists ...
	I1018 12:41:36.430436  920275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:41:36.430481  920275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-877597-m03
	I1018 12:41:36.454140  920275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/ha-877597-m03/id_rsa Username:docker}
	I1018 12:41:36.554556  920275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:41:36.568041  920275 kubeconfig.go:125] found "ha-877597" server: "https://192.168.49.254:8443"
	I1018 12:41:36.568073  920275 api_server.go:166] Checking apiserver status ...
	I1018 12:41:36.568152  920275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:41:36.580694  920275 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2094/cgroup
	I1018 12:41:36.594737  920275 api_server.go:182] apiserver freezer: "5:freezer:/docker/b67694337bca20eed88d6021300432d9ed0d5c97b45dcdc840c0cb9ef66f72f7/kubepods/burstable/pod7981f03545c031c753b3a5e475e3ae4a/be6f8221e55aef2b6ee3d7f2965745b99c4a4fe66a14e34cb5a6188476f2a6f5"
	I1018 12:41:36.594815  920275 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b67694337bca20eed88d6021300432d9ed0d5c97b45dcdc840c0cb9ef66f72f7/kubepods/burstable/pod7981f03545c031c753b3a5e475e3ae4a/be6f8221e55aef2b6ee3d7f2965745b99c4a4fe66a14e34cb5a6188476f2a6f5/freezer.state
	I1018 12:41:36.607489  920275 api_server.go:204] freezer state: "THAWED"
	I1018 12:41:36.607520  920275 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 12:41:36.615692  920275 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 12:41:36.615721  920275 status.go:463] ha-877597-m03 apiserver status = Running (err=<nil>)
	I1018 12:41:36.615730  920275 status.go:176] ha-877597-m03 status: &{Name:ha-877597-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:41:36.615746  920275 status.go:174] checking status of ha-877597-m04 ...
	I1018 12:41:36.616067  920275 cli_runner.go:164] Run: docker container inspect ha-877597-m04 --format={{.State.Status}}
	I1018 12:41:36.633623  920275 status.go:371] ha-877597-m04 host status = "Running" (err=<nil>)
	I1018 12:41:36.633650  920275 host.go:66] Checking if "ha-877597-m04" exists ...
	I1018 12:41:36.633962  920275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-877597-m04
	I1018 12:41:36.652448  920275 host.go:66] Checking if "ha-877597-m04" exists ...
	I1018 12:41:36.652851  920275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:41:36.652903  920275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-877597-m04
	I1018 12:41:36.671464  920275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/ha-877597-m04/id_rsa Username:docker}
	I1018 12:41:36.774310  920275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:41:36.791447  920275 status.go:176] ha-877597-m04 status: &{Name:ha-877597-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.20s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (46.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node start m02 --alsologtostderr -v 5
E1018 12:41:43.472493  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 node start m02 --alsologtostderr -v 5: (45.28343375s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5: (1.238086202s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (46.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1018 12:42:24.434278  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084929111s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (245.22s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 stop --alsologtostderr -v 5
E1018 12:42:52.904970  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 stop --alsologtostderr -v 5: (35.568397579s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 start --wait true --alsologtostderr -v 5
E1018 12:43:46.358151  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:46:02.495028  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:46:30.200218  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 start --wait true --alsologtostderr -v 5: (3m29.501084428s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (245.22s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.5s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 node delete m03 --alsologtostderr -v 5: (10.548561425s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.50s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

TestMultiControlPlane/serial/StopCluster (33.65s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 stop --alsologtostderr -v 5: (33.531877096s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5: exit status 7 (113.134044ms)
-- stdout --
	ha-877597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-877597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-877597-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1018 12:47:16.495842  948871 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:47:16.496012  948871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:47:16.496042  948871 out.go:374] Setting ErrFile to fd 2...
	I1018 12:47:16.496063  948871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:47:16.496340  948871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 12:47:16.496563  948871 out.go:368] Setting JSON to false
	I1018 12:47:16.496638  948871 mustload.go:65] Loading cluster: ha-877597
	I1018 12:47:16.496710  948871 notify.go:220] Checking for updates...
	I1018 12:47:16.497772  948871 config.go:182] Loaded profile config "ha-877597": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:47:16.497825  948871 status.go:174] checking status of ha-877597 ...
	I1018 12:47:16.498472  948871 cli_runner.go:164] Run: docker container inspect ha-877597 --format={{.State.Status}}
	I1018 12:47:16.521070  948871 status.go:371] ha-877597 host status = "Stopped" (err=<nil>)
	I1018 12:47:16.521094  948871 status.go:384] host is not running, skipping remaining checks
	I1018 12:47:16.521101  948871 status.go:176] ha-877597 status: &{Name:ha-877597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:47:16.521131  948871 status.go:174] checking status of ha-877597-m02 ...
	I1018 12:47:16.521486  948871 cli_runner.go:164] Run: docker container inspect ha-877597-m02 --format={{.State.Status}}
	I1018 12:47:16.538596  948871 status.go:371] ha-877597-m02 host status = "Stopped" (err=<nil>)
	I1018 12:47:16.538620  948871 status.go:384] host is not running, skipping remaining checks
	I1018 12:47:16.538627  948871 status.go:176] ha-877597-m02 status: &{Name:ha-877597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:47:16.538645  948871 status.go:174] checking status of ha-877597-m04 ...
	I1018 12:47:16.538954  948871 cli_runner.go:164] Run: docker container inspect ha-877597-m04 --format={{.State.Status}}
	I1018 12:47:16.559808  948871 status.go:371] ha-877597-m04 host status = "Stopped" (err=<nil>)
	I1018 12:47:16.559843  948871 status.go:384] host is not running, skipping remaining checks
	I1018 12:47:16.559853  948871 status.go:176] ha-877597-m04 status: &{Name:ha-877597-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.65s)

TestMultiControlPlane/serial/RestartCluster (116.71s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1018 12:47:52.904463  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m55.720641142s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (116.71s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (68.01s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 node add --control-plane --alsologtostderr -v 5: (1m6.869349559s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-877597 status --alsologtostderr -v 5: (1.143627597s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (68.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048501092s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestImageBuild/serial/Setup (32.21s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-237249 --driver=docker  --container-runtime=docker
E1018 12:51:02.502464  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-237249 --driver=docker  --container-runtime=docker: (32.205642548s)
--- PASS: TestImageBuild/serial/Setup (32.21s)

TestImageBuild/serial/NormalBuild (1.68s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-237249
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-237249: (1.680690672s)
--- PASS: TestImageBuild/serial/NormalBuild (1.68s)

TestImageBuild/serial/BuildWithBuildArg (0.88s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-237249
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.88s)

TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-237249
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-237249
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

TestJSONOutput/start/Command (78.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-976616 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-976616 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m18.88667666s)
--- PASS: TestJSONOutput/start/Command (78.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-976616 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-976616 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-976616 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-976616 --output=json --user=testUser: (6.006762167s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-478310 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-478310 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.063962ms)

-- stdout --
	{"specversion":"1.0","id":"ab6b55d2-f5a4-4d90-95ed-d87bb99cd076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-478310] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f962c3a3-7c46-4902-84b8-696ed29b243e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"d2cafe56-9a74-44b3-b97a-ff3c4c450303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4096fbd8-de32-48e3-97be-095039b5b9aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig"}}
	{"specversion":"1.0","id":"df1fa2b8-c96d-4192-b177-4516496c034f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube"}}
	{"specversion":"1.0","id":"ece4fb3d-0b7f-47ff-8b3a-22d9536509d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5d5b5c0d-5210-4bc1-ace6-7c23042fb95f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"41e2f126-fc48-448d-b578-4bf075035b89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-478310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-478310
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (37.06s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-492739 --network=
E1018 12:52:52.905147  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-492739 --network=: (34.785335378s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-492739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-492739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-492739: (2.245935045s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.06s)

TestKicCustomNetwork/use_default_bridge_network (36.75s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-934939 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-934939 --network=bridge: (34.671347101s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-934939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-934939
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-934939: (2.050625831s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.75s)

TestKicExistingNetwork (36.66s)

=== RUN   TestKicExistingNetwork
I1018 12:53:52.546799  837622 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 12:53:52.562575  837622 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 12:53:52.562657  837622 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 12:53:52.562675  837622 cli_runner.go:164] Run: docker network inspect existing-network
W1018 12:53:52.578545  837622 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 12:53:52.578572  837622 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1018 12:53:52.578589  837622 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1018 12:53:52.578688  837622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 12:53:52.595532  837622 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0579dfbcb6dc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:63:f7:bc:36:bc} reservation:<nil>}
I1018 12:53:52.595836  837622 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017ff110}
I1018 12:53:52.595859  837622 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 12:53:52.595907  837622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 12:53:52.648656  837622 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-682325 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-682325 --network=existing-network: (34.499258124s)
helpers_test.go:175: Cleaning up "existing-network-682325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-682325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-682325: (2.0270416s)
I1018 12:54:29.191391  837622 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.66s)

TestKicCustomSubnet (35.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-784512 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-784512 --subnet=192.168.60.0/24: (33.142583629s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-784512 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-784512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-784512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-784512: (2.213879799s)
--- PASS: TestKicCustomSubnet (35.38s)

TestKicStaticIP (37.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-891048 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-891048 --static-ip=192.168.200.200: (35.409599156s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-891048 ip
helpers_test.go:175: Cleaning up "static-ip-891048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-891048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-891048: (2.261137849s)
--- PASS: TestKicStaticIP (37.83s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-157082 --driver=docker  --container-runtime=docker
E1018 12:55:55.973772  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:56:02.497383  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-157082 --driver=docker  --container-runtime=docker: (32.768046746s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-159962 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-159962 --driver=docker  --container-runtime=docker: (35.573715354s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-157082
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-159962
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-159962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-159962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-159962: (2.18028912s)
helpers_test.go:175: Cleaning up "first-157082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-157082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-157082: (2.219341167s)
--- PASS: TestMinikubeProfile (74.23s)

TestMountStart/serial/StartWithMountFirst (10.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-985254 --memory=3072 --mount-string /tmp/TestMountStartserial2670613367/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-985254 --memory=3072 --mount-string /tmp/TestMountStartserial2670613367/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.61717892s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.62s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-985254 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-987158 --memory=3072 --mount-string /tmp/TestMountStartserial2670613367/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-987158 --memory=3072 --mount-string /tmp/TestMountStartserial2670613367/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.726788166s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-987158 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-985254 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-985254 --alsologtostderr -v=5: (1.569019709s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-987158 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-987158
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-987158: (1.286065082s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-987158
E1018 12:57:25.562450  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-987158: (7.861415115s)
--- PASS: TestMountStart/serial/RestartStopped (8.86s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-987158 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (92.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-370719 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1018 12:57:52.904950  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-370719 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.074222808s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.65s)

TestMultiNode/serial/DeployApp2Nodes (5.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-370719 -- rollout status deployment/busybox: (3.622764923s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-rz84t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-vwgrg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-rz84t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-vwgrg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-rz84t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-vwgrg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.77s)

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-rz84t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-rz84t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-vwgrg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-370719 -- exec busybox-7b57f96db7-vwgrg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)

TestMultiNode/serial/AddNode (34.78s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-370719 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-370719 -v=5 --alsologtostderr: (34.084825895s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.78s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-370719 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.88s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.88s)

TestMultiNode/serial/CopyFile (10.31s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp testdata/cp-test.txt multinode-370719:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile56842881/001/cp-test_multinode-370719.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719:/home/docker/cp-test.txt multinode-370719-m02:/home/docker/cp-test_multinode-370719_multinode-370719-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m02 "sudo cat /home/docker/cp-test_multinode-370719_multinode-370719-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719:/home/docker/cp-test.txt multinode-370719-m03:/home/docker/cp-test_multinode-370719_multinode-370719-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m03 "sudo cat /home/docker/cp-test_multinode-370719_multinode-370719-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp testdata/cp-test.txt multinode-370719-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile56842881/001/cp-test_multinode-370719-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719-m02:/home/docker/cp-test.txt multinode-370719:/home/docker/cp-test_multinode-370719-m02_multinode-370719.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719 "sudo cat /home/docker/cp-test_multinode-370719-m02_multinode-370719.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719-m02:/home/docker/cp-test.txt multinode-370719-m03:/home/docker/cp-test_multinode-370719-m02_multinode-370719-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m03 "sudo cat /home/docker/cp-test_multinode-370719-m02_multinode-370719-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp testdata/cp-test.txt multinode-370719-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile56842881/001/cp-test_multinode-370719-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719-m03:/home/docker/cp-test.txt multinode-370719:/home/docker/cp-test_multinode-370719-m03_multinode-370719.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719 "sudo cat /home/docker/cp-test_multinode-370719-m03_multinode-370719.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 cp multinode-370719-m03:/home/docker/cp-test.txt multinode-370719-m02:/home/docker/cp-test_multinode-370719-m03_multinode-370719-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 ssh -n multinode-370719-m02 "sudo cat /home/docker/cp-test_multinode-370719-m03_multinode-370719-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.31s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-370719 node stop m03: (1.321036736s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-370719 status: exit status 7 (547.512637ms)

-- stdout --
	multinode-370719
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370719-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-370719-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr: exit status 7 (547.420041ms)

-- stdout --
	multinode-370719
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370719-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-370719-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 12:59:57.962331 1022208 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:59:57.962538 1022208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:59:57.962566 1022208 out.go:374] Setting ErrFile to fd 2...
	I1018 12:59:57.962586 1022208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:59:57.962977 1022208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 12:59:57.963240 1022208 out.go:368] Setting JSON to false
	I1018 12:59:57.963295 1022208 mustload.go:65] Loading cluster: multinode-370719
	I1018 12:59:57.963565 1022208 notify.go:220] Checking for updates...
	I1018 12:59:57.964894 1022208 config.go:182] Loaded profile config "multinode-370719": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 12:59:57.964941 1022208 status.go:174] checking status of multinode-370719 ...
	I1018 12:59:57.965536 1022208 cli_runner.go:164] Run: docker container inspect multinode-370719 --format={{.State.Status}}
	I1018 12:59:57.996513 1022208 status.go:371] multinode-370719 host status = "Running" (err=<nil>)
	I1018 12:59:57.996541 1022208 host.go:66] Checking if "multinode-370719" exists ...
	I1018 12:59:57.996824 1022208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-370719
	I1018 12:59:58.025853 1022208 host.go:66] Checking if "multinode-370719" exists ...
	I1018 12:59:58.026181 1022208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:59:58.026230 1022208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-370719
	I1018 12:59:58.047244 1022208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34019 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/multinode-370719/id_rsa Username:docker}
	I1018 12:59:58.150774 1022208 ssh_runner.go:195] Run: systemctl --version
	I1018 12:59:58.157160 1022208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:59:58.170799 1022208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:59:58.226870 1022208 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 12:59:58.217429343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:59:58.227414 1022208 kubeconfig.go:125] found "multinode-370719" server: "https://192.168.67.2:8443"
	I1018 12:59:58.227455 1022208 api_server.go:166] Checking apiserver status ...
	I1018 12:59:58.227497 1022208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:59:58.243370 1022208 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2116/cgroup
	I1018 12:59:58.251879 1022208 api_server.go:182] apiserver freezer: "5:freezer:/docker/27d32e131e4bf8f45bdccccadfc78dbaf21fbfeeaa2d8eb2e9fdde2d6204768f/kubepods/burstable/podca3b6265ff9d486f7cd372112322b92e/6d1068c136754780a42f82744236415bf7fcd70f3021fef8a4b84d10a82daee3"
	I1018 12:59:58.251949 1022208 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27d32e131e4bf8f45bdccccadfc78dbaf21fbfeeaa2d8eb2e9fdde2d6204768f/kubepods/burstable/podca3b6265ff9d486f7cd372112322b92e/6d1068c136754780a42f82744236415bf7fcd70f3021fef8a4b84d10a82daee3/freezer.state
	I1018 12:59:58.259755 1022208 api_server.go:204] freezer state: "THAWED"
	I1018 12:59:58.259789 1022208 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 12:59:58.268266 1022208 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 12:59:58.268300 1022208 status.go:463] multinode-370719 apiserver status = Running (err=<nil>)
	I1018 12:59:58.268313 1022208 status.go:176] multinode-370719 status: &{Name:multinode-370719 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:59:58.268332 1022208 status.go:174] checking status of multinode-370719-m02 ...
	I1018 12:59:58.268673 1022208 cli_runner.go:164] Run: docker container inspect multinode-370719-m02 --format={{.State.Status}}
	I1018 12:59:58.285635 1022208 status.go:371] multinode-370719-m02 host status = "Running" (err=<nil>)
	I1018 12:59:58.285663 1022208 host.go:66] Checking if "multinode-370719-m02" exists ...
	I1018 12:59:58.285948 1022208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-370719-m02
	I1018 12:59:58.304107 1022208 host.go:66] Checking if "multinode-370719-m02" exists ...
	I1018 12:59:58.304460 1022208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:59:58.304505 1022208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-370719-m02
	I1018 12:59:58.322519 1022208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/21647-835760/.minikube/machines/multinode-370719-m02/id_rsa Username:docker}
	I1018 12:59:58.422170 1022208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:59:58.435163 1022208 status.go:176] multinode-370719-m02 status: &{Name:multinode-370719-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:59:58.435202 1022208 status.go:174] checking status of multinode-370719-m03 ...
	I1018 12:59:58.435520 1022208 cli_runner.go:164] Run: docker container inspect multinode-370719-m03 --format={{.State.Status}}
	I1018 12:59:58.453163 1022208 status.go:371] multinode-370719-m03 host status = "Stopped" (err=<nil>)
	I1018 12:59:58.453226 1022208 status.go:384] host is not running, skipping remaining checks
	I1018 12:59:58.453234 1022208 status.go:176] multinode-370719-m03 status: &{Name:multinode-370719-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

TestMultiNode/serial/StartAfterStop (9.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-370719 node start m03 -v=5 --alsologtostderr: (8.996613526s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.81s)

TestMultiNode/serial/RestartKeepsNodes (74.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-370719
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-370719
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-370719: (23.037995445s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-370719 --wait=true -v=5 --alsologtostderr
E1018 13:01:02.494721  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-370719 --wait=true -v=5 --alsologtostderr: (51.811006627s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-370719
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.99s)

TestMultiNode/serial/DeleteNode (5.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-370719 node delete m03: (5.093899484s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.77s)

TestMultiNode/serial/StopMultiNode (22.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-370719 stop: (21.948684978s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-370719 status: exit status 7 (121.430136ms)

-- stdout --
	multinode-370719
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-370719-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr: exit status 7 (107.499369ms)

-- stdout --
	multinode-370719
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-370719-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 13:01:51.161907 1035923 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:01:51.162093 1035923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:01:51.162103 1035923 out.go:374] Setting ErrFile to fd 2...
	I1018 13:01:51.162108 1035923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:01:51.162388 1035923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-835760/.minikube/bin
	I1018 13:01:51.162606 1035923 out.go:368] Setting JSON to false
	I1018 13:01:51.162657 1035923 mustload.go:65] Loading cluster: multinode-370719
	I1018 13:01:51.162801 1035923 notify.go:220] Checking for updates...
	I1018 13:01:51.163061 1035923 config.go:182] Loaded profile config "multinode-370719": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1018 13:01:51.163081 1035923 status.go:174] checking status of multinode-370719 ...
	I1018 13:01:51.163904 1035923 cli_runner.go:164] Run: docker container inspect multinode-370719 --format={{.State.Status}}
	I1018 13:01:51.183973 1035923 status.go:371] multinode-370719 host status = "Stopped" (err=<nil>)
	I1018 13:01:51.184000 1035923 status.go:384] host is not running, skipping remaining checks
	I1018 13:01:51.184012 1035923 status.go:176] multinode-370719 status: &{Name:multinode-370719 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 13:01:51.184049 1035923 status.go:174] checking status of multinode-370719-m02 ...
	I1018 13:01:51.184360 1035923 cli_runner.go:164] Run: docker container inspect multinode-370719-m02 --format={{.State.Status}}
	I1018 13:01:51.213213 1035923 status.go:371] multinode-370719-m02 host status = "Stopped" (err=<nil>)
	I1018 13:01:51.213237 1035923 status.go:384] host is not running, skipping remaining checks
	I1018 13:01:51.213244 1035923 status.go:176] multinode-370719-m02 status: &{Name:multinode-370719-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.18s)

TestMultiNode/serial/RestartMultiNode (52.06s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-370719 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-370719 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (51.364127076s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-370719 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.06s)

TestMultiNode/serial/ValidateNameConflict (36.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-370719
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-370719-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-370719-m02 --driver=docker  --container-runtime=docker: exit status 14 (89.506391ms)

-- stdout --
	* [multinode-370719-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-370719-m02' is duplicated with machine name 'multinode-370719-m02' in profile 'multinode-370719'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-370719-m03 --driver=docker  --container-runtime=docker
E1018 13:02:52.905099  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-370719-m03 --driver=docker  --container-runtime=docker: (33.366662709s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-370719
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-370719: exit status 80 (329.040881ms)

-- stdout --
	* Adding node m03 to cluster multinode-370719 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-370719-m03 already exists in multinode-370719-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-370719-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-370719-m03: (2.289825681s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.13s)

TestPreload (119.08s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-467366 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-467366 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (54.582034015s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-467366 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-467366 image pull gcr.io/k8s-minikube/busybox: (2.330887637s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-467366
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-467366: (5.938856095s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-467366 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-467366 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (53.654950712s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-467366 image list
helpers_test.go:175: Cleaning up "test-preload-467366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-467366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-467366: (2.335009636s)
--- PASS: TestPreload (119.08s)

TestScheduledStopUnix (111.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-919883 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-919883 --memory=3072 --driver=docker  --container-runtime=docker: (37.663717883s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-919883 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-919883 -n scheduled-stop-919883
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-919883 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 13:06:00.982625  837622 retry.go:31] will retry after 68.387µs: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.983771  837622 retry.go:31] will retry after 147.137µs: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.984907  837622 retry.go:31] will retry after 257.461µs: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.986028  837622 retry.go:31] will retry after 311.204µs: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.987136  837622 retry.go:31] will retry after 569.996µs: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.988243  837622 retry.go:31] will retry after 870.085µs: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.989318  837622 retry.go:31] will retry after 1.660351ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.991500  837622 retry.go:31] will retry after 1.825951ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.993678  837622 retry.go:31] will retry after 1.41205ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:00.995830  837622 retry.go:31] will retry after 3.108491ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:01.000038  837622 retry.go:31] will retry after 2.890229ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:01.003266  837622 retry.go:31] will retry after 9.138994ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:01.013498  837622 retry.go:31] will retry after 16.458998ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:01.030758  837622 retry.go:31] will retry after 13.558183ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:01.044996  837622 retry.go:31] will retry after 28.782917ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
I1018 13:06:01.074345  837622 retry.go:31] will retry after 63.040002ms: open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/scheduled-stop-919883/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-919883 --cancel-scheduled
E1018 13:06:02.494113  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-919883 -n scheduled-stop-919883
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-919883
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-919883 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-919883
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-919883: exit status 7 (74.177893ms)

-- stdout --
	scheduled-stop-919883
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-919883 -n scheduled-stop-919883
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-919883 -n scheduled-stop-919883: exit status 7 (68.941612ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-919883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-919883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-919883: (1.64376296s)
--- PASS: TestScheduledStopUnix (111.04s)

TestSkaffold (145.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3095873220 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-559745 --memory=3072 --driver=docker  --container-runtime=docker
E1018 13:07:52.905009  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-559745 --memory=3072 --driver=docker  --container-runtime=docker: (38.600396586s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3095873220 run --minikube-profile skaffold-559745 --kube-context skaffold-559745 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3095873220 run --minikube-profile skaffold-559745 --kube-context skaffold-559745 --status-check=true --port-forward=false --interactive=false: (1m31.059330875s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-dd6f9bd4f-jgs8g" [d1ef32be-953a-4476-a4a5-b8bce07e7f5d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004125728s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-96686c979-m9m2q" [7867614e-4289-48c0-9017-0c0cc7533bdc] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003581172s
helpers_test.go:175: Cleaning up "skaffold-559745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-559745
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-559745: (3.090301289s)
--- PASS: TestSkaffold (145.22s)

TestInsufficientStorage (14.05s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-881466 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-881466 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.684098925s)

-- stdout --
	{"specversion":"1.0","id":"06364780-1693-4c4b-bb2c-aaaca82f1374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-881466] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42999ebb-6506-461e-9807-7a66e14cbd0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"184085ba-8e79-4434-b066-bc996fa1d5e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"517c41b3-c12d-4d09-93d6-a1943fc6bd7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig"}}
	{"specversion":"1.0","id":"35789b93-fcfd-4e94-94e4-d94ebd6766a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube"}}
	{"specversion":"1.0","id":"05d63b4e-5dfb-4b04-91c2-74b8e446a5af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d4294c32-1155-452f-be24-dc5b5c6d7dba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ea57f9e-e8e4-4e01-b964-99aeaa6dfcc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"80db0c7c-cdf0-4b94-9d23-f2bc96e5afc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"123a1b1f-c525-44b3-a674-c1b7515520f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b05fc030-7958-4921-b7de-fc1e0326c4f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c799852b-2022-460d-ab7e-33143de4cef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-881466\" primary control-plane node in \"insufficient-storage-881466\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a77ad012-48c4-4755-b55a-7c4531888e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6a9e4dc-44d7-4a28-9142-fd3286567111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d397dc0-90f3-403c-8f51-97136c92eb7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
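With `--output=json`, minikube emits one CloudEvents-style JSON object per line, and the final `io.k8s.sigs.minikube.error` event carries the machine-readable failure (`exitcode: 26`, `RSRC_DOCKER_STORAGE` above). A minimal sketch of consuming such a stream — the `lastError` helper and the trimmed sample event are illustrative, not part of minikube:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event models only the fields of minikube's JSON event lines used below;
// the full schema carries more attributes (specversion, id, source, ...).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// lastError scans a newline-delimited event stream and returns the message
// and exit code of the final io.k8s.sigs.minikube.error event, if any.
func lastError(stream string) (msg, code string, ok bool) {
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			msg, code, ok = e.Data["message"], e.Data["exitcode"], true
		}
	}
	return
}

func main() {
	sample := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","message":"Docker is out of disk space!"}}`
	if m, c, ok := lastError(sample); ok {
		fmt.Printf("exit %s: %s\n", c, m)
	}
}
```

This is how a harness can map the stream directly to the `exit status 26` the test asserts on, without scraping the human-readable message.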
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-881466 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-881466 --output=json --layout=cluster: exit status 7 (290.312406ms)

-- stdout --
	{"Name":"insufficient-storage-881466","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-881466","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1018 13:09:50.927823 1069633 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-881466" does not appear in /home/jenkins/minikube-integration/21647-835760/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-881466 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-881466 --output=json --layout=cluster: exit status 7 (301.636389ms)

-- stdout --
	{"Name":"insufficient-storage-881466","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-881466","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1018 13:09:51.229702 1069699 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-881466" does not appear in /home/jenkins/minikube-integration/21647-835760/kubeconfig
	E1018 13:09:51.239695 1069699 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/insufficient-storage-881466/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-881466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-881466
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-881466: (1.773840929s)
--- PASS: TestInsufficientStorage (14.05s)

TestRunningBinaryUpgrade (89.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3103083906 start -p running-upgrade-400573 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1018 13:12:35.975697  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:12:52.904453  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3103083906 start -p running-upgrade-400573 --memory=3072 --vm-driver=docker  --container-runtime=docker: (55.215971481s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-400573 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-400573 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.760206651s)
helpers_test.go:175: Cleaning up "running-upgrade-400573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-400573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-400573: (2.152490595s)
--- PASS: TestRunningBinaryUpgrade (89.02s)

TestKubernetesUpgrade (224.01s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1018 13:15:46.786908  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:16:02.494341  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.994114709s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-528118
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-528118: (11.598260912s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-528118 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-528118 status --format={{.Host}}: exit status 7 (103.034315ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m58.16685307s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-528118 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (89.532003ms)

-- stdout --
	* [kubernetes-upgrade-528118] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-528118
	    minikube start -p kubernetes-upgrade-528118 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5281182 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-528118 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-528118 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.33213916s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-528118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-528118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-528118: (2.633296852s)
--- PASS: TestKubernetesUpgrade (224.01s)

TestMissingContainerUpgrade (86.04s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3253709237 start -p missing-upgrade-364328 --memory=3072 --driver=docker  --container-runtime=docker
E1018 13:14:05.563758  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:24.848258  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:24.854758  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:24.866199  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:24.887572  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:24.929070  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:25.010483  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:25.171739  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:25.493513  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:26.135709  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:27.417028  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3253709237 start -p missing-upgrade-364328 --memory=3072 --driver=docker  --container-runtime=docker: (30.530353785s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-364328
E1018 13:14:29.979186  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:14:35.101418  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-364328: (10.445995046s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-364328
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-364328 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1018 13:14:45.342822  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:15:05.824994  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-364328 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.905479185s)
helpers_test.go:175: Cleaning up "missing-upgrade-364328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-364328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-364328: (2.309136934s)
--- PASS: TestMissingContainerUpgrade (86.04s)

TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (74.74s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2440071536 start -p stopped-upgrade-542583 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2440071536 start -p stopped-upgrade-542583 --memory=3072 --vm-driver=docker  --container-runtime=docker: (42.390581607s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2440071536 -p stopped-upgrade-542583 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2440071536 -p stopped-upgrade-542583 stop: (10.919903525s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-542583 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1018 13:17:08.709367  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-542583 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.42535114s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.74s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-542583
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-542583: (1.10296415s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

TestPause/serial/Start (80.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-974819 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1018 13:17:52.904767  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-974819 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m20.652013949s)
--- PASS: TestPause/serial/Start (80.65s)

TestPause/serial/SecondStartNoReconfiguration (53.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-974819 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-974819 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.067955529s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693716 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-693716 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (103.272891ms)

-- stdout --
	* [NoKubernetes-693716] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-835760/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-835760/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (40.46s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693716 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1018 13:19:24.847370  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693716 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.937379875s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-693716 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.46s)

TestPause/serial/Pause (1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-974819 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (1.00s)

TestPause/serial/VerifyStatus (0.56s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-974819 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-974819 --output=json --layout=cluster: exit status 2 (563.917134ms)

-- stdout --
	{"Name":"pause-974819","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-974819","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.56s)

TestPause/serial/Unpause (0.9s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-974819 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

TestPause/serial/PauseAgain (1.23s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-974819 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-974819 --alsologtostderr -v=5: (1.230182138s)
--- PASS: TestPause/serial/PauseAgain (1.23s)

TestNoKubernetes/serial/StartWithStopK8s (19.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693716 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693716 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (17.16280462s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-693716 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-693716 status -o json: exit status 2 (380.846648ms)

-- stdout --
	{"Name":"NoKubernetes-693716","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-693716
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-693716: (1.930118966s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.47s)

TestPause/serial/DeletePaused (2.99s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-974819 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-974819 --alsologtostderr -v=5: (2.991758516s)
--- PASS: TestPause/serial/DeletePaused (2.99s)

TestPause/serial/VerifyDeletedResources (0.66s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-974819
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-974819: exit status 1 (29.482237ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-974819: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.66s)

TestNetworkPlugins/group/auto/Start (83.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1018 13:19:52.551533  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m23.914918903s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.92s)

TestNoKubernetes/serial/Start (12.39s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693716 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693716 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (12.387521185s)
--- PASS: TestNoKubernetes/serial/Start (12.39s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-693716 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-693716 "sudo systemctl is-active --quiet service kubelet": exit status 1 (377.869613ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

TestNoKubernetes/serial/ProfileList (1.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

TestNoKubernetes/serial/Stop (1.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-693716
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-693716: (1.358145471s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (8.81s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693716 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693716 --driver=docker  --container-runtime=docker: (8.812939247s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-693716 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-693716 "sudo systemctl is-active --quiet service kubelet": exit status 1 (434.75551ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestNetworkPlugins/group/calico/Start (69.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1018 13:21:02.494262  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.166142011s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-857514 "pgrep -a kubelet"
I1018 13:21:15.365527  837622 config.go:182] Loaded profile config "auto-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

TestNetworkPlugins/group/auto/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-857514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zxxct" [8230bc6d-d9af-4db9-976f-70339299c0bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zxxct" [8230bc6d-d9af-4db9-976f-70339299c0bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004035077s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)

TestNetworkPlugins/group/auto/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.38s)

TestNetworkPlugins/group/auto/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.28s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mfxcl" [a08184c2-dbe4-4d03-94ee-eaa1df3ae5bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003620513s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-857514 "pgrep -a kubelet"
I1018 13:21:48.516662  837622 config.go:182] Loaded profile config "calico-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-857514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jq8xt" [0dc38616-396f-4ec0-a275-1635adeb7c60] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jq8xt" [0dc38616-396f-4ec0-a275-1635adeb7c60] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003449622s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

TestNetworkPlugins/group/custom-flannel/Start (58.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (58.472819343s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.47s)

TestNetworkPlugins/group/calico/DNS (0.42s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.42s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/false/Start (79.38s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m19.381224542s)
--- PASS: TestNetworkPlugins/group/false/Start (79.38s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-857514 "pgrep -a kubelet"
I1018 13:22:50.890235  837622 config.go:182] Loaded profile config "custom-flannel-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-857514 replace --force -f testdata/netcat-deployment.yaml
I1018 13:22:51.229134  837622 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hgt26" [34d90698-30da-4c71-8918-2dad6ae4f7ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 13:22:52.905133  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hgt26" [34d90698-30da-4c71-8918-2dad6ae4f7ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003831551s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/kindnet/Start (58.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (58.354072525s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.35s)

TestNetworkPlugins/group/false/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-857514 "pgrep -a kubelet"
I1018 13:23:47.716892  837622 config.go:182] Loaded profile config "false-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/false/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-857514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4flnp" [f3e35702-eb04-4f58-93e2-23b0111f73d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4flnp" [f3e35702-eb04-4f58-93e2-23b0111f73d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004205907s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.32s)

TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

TestNetworkPlugins/group/false/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (63.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m3.593648064s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-xwkbv" [9dbf0efb-7895-4a34-b1fb-276b9811f201] Running
E1018 13:24:24.847476  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003154896s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-857514 "pgrep -a kubelet"
I1018 13:24:30.974342  837622 config.go:182] Loaded profile config "kindnet-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-857514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h99wf" [3cff6faa-2ac2-4b61-840e-8d9cec7442eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h99wf" [3cff6faa-2ac2-4b61-840e-8d9cec7442eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004794151s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (50.477906628s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zl27n" [2997158a-4a9a-465b-b13c-fcd31410fce8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003288524s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-857514 "pgrep -a kubelet"
I1018 13:25:31.781224  837622 config.go:182] Loaded profile config "flannel-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-857514 replace --force -f testdata/netcat-deployment.yaml
I1018 13:25:32.091561  837622 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jvznm" [63ce6237-e191-4049-be5f-67b688c49ff1] Pending
helpers_test.go:352: "netcat-cd4db9dbf-jvznm" [63ce6237-e191-4049-be5f-67b688c49ff1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jvznm" [63ce6237-e191-4049-be5f-67b688c49ff1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.002990308s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-857514 "pgrep -a kubelet"
I1018 13:25:59.629804  837622 config.go:182] Loaded profile config "enable-default-cni-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.55s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-857514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5bdgn" [716b3488-6437-4f85-b994-04c4701118c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 13:26:02.494190  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5bdgn" [716b3488-6437-4f85-b994-04c4701118c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005425158s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.55s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.35s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (56.34926275s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (80.40s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1018 13:26:36.207117  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/auto-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.087892  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.096561  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.110259  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.131591  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.173011  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.254360  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.415795  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:42.737492  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:43.379585  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:44.660878  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:47.223076  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:52.345233  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:56.689640  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/auto-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:02.587162  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-857514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m20.397045448s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (80.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-857514 "pgrep -a kubelet"
I1018 13:27:05.141808  837622 config.go:182] Loaded profile config "bridge-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-857514 replace --force -f testdata/netcat-deployment.yaml
I1018 13:27:05.489681  837622 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sh8rp" [ff6528e2-d9c7-4829-ad2a-bfbeacb6e8d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sh8rp" [ff6528e2-d9c7-4829-ad2a-bfbeacb6e8d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003826648s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (91.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-418330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1018 13:27:37.651635  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/auto-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.200644  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.207027  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.218386  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.239761  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.281139  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.362782  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.525024  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:51.846630  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:52.488791  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:52.904430  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:53.771045  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:27:56.332471  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-418330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m31.490092077s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (91.49s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-857514 "pgrep -a kubelet"
I1018 13:27:56.906891  837622 config.go:182] Loaded profile config "kubenet-857514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-857514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fv67x" [8f620d5f-8600-4b4f-8024-51d17a6395a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 13:28:01.455995  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fv67x" [8f620d5f-8600-4b4f-8024-51d17a6395a6] Running
E1018 13:28:04.030378  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.003119394s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-857514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-857514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (85.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-628894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1018 13:28:48.014535  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.020869  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.032224  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.053574  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.094950  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.176313  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.337722  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:48.659995  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:49.301515  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:50.582799  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:53.144858  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:58.266737  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:59.572985  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/auto-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-628894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m25.912338409s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-418330 create -f testdata/busybox.yaml
E1018 13:29:08.508503  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e5948269-657a-4920-95cb-7bc88c3a4cb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e5948269-657a-4920-95cb-7bc88c3a4cb0] Running
E1018 13:29:13.142524  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:15.977426  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004184207s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-418330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-418330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-418330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.733548797s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-418330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-418330 --alsologtostderr -v=3
E1018 13:29:24.690504  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:24.696864  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:24.708253  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:24.729655  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:24.771266  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:24.847717  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:24.853090  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:25.014686  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:25.336389  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:25.952681  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:25.978098  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:27.260010  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:28.990079  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:29.821349  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-418330 --alsologtostderr -v=3: (11.499595371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-418330 -n old-k8s-version-418330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-418330 -n old-k8s-version-418330: exit status 7 (82.104444ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-418330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (30.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-418330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1018 13:29:34.942787  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:45.186106  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-418330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (30.34517295s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-418330 -n old-k8s-version-418330
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (30.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-628894 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d28b7cc2-2a1f-454e-bbaf-4efea685e053] Pending
helpers_test.go:352: "busybox" [d28b7cc2-2a1f-454e-bbaf-4efea685e053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d28b7cc2-2a1f-454e-bbaf-4efea685e053] Running
E1018 13:30:05.668294  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003661996s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-628894 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wxpsl" [fe989ba3-1fea-45b2-ad7e-df7bd1cf9a27] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wxpsl" [fe989ba3-1fea-45b2-ad7e-df7bd1cf9a27] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003244983s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-628894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1018 13:30:09.951852  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-628894 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.48s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-628894 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-628894 --alsologtostderr -v=3: (11.475126304s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wxpsl" [fe989ba3-1fea-45b2-ad7e-df7bd1cf9a27] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004152546s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-418330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-418330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-418330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-418330 -n old-k8s-version-418330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-418330 -n old-k8s-version-418330: exit status 2 (328.741517ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-418330 -n old-k8s-version-418330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-418330 -n old-k8s-version-418330: exit status 2 (318.829488ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-418330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-418330 -n old-k8s-version-418330
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-418330 -n old-k8s-version-418330
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-628894 -n no-preload-628894
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-628894 -n no-preload-628894: exit status 7 (177.214453ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-628894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (60.2s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-628894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-628894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (59.819913595s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-628894 -n no-preload-628894
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (80.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-229658 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1018 13:30:27.963959  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:30.525341  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:35.064693  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:35.646701  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:45.567050  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:45.890697  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:46.630211  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:30:47.913139  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.073521  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.079845  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.091199  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.112585  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.154545  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.236225  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.399318  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:00.720701  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:01.362642  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:02.494661  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/functional-918162/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:02.644034  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:05.206162  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:06.372760  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:10.327752  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:15.712287  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/auto-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:20.569421  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-229658 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m20.121546927s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d9dxd" [071be90d-e8aa-47ae-bc08-01d8ba4ac6ae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003101509s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d9dxd" [071be90d-e8aa-47ae-bc08-01d8ba4ac6ae] Running
E1018 13:31:31.873932  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003723738s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-628894 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-628894 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-628894 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-628894 -n no-preload-628894
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-628894 -n no-preload-628894: exit status 2 (344.863601ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-628894 -n no-preload-628894
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-628894 -n no-preload-628894: exit status 2 (357.339025ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-628894 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-628894 -n no-preload-628894
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-628894 -n no-preload-628894
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-895134 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1018 13:31:41.051622  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:42.088097  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:43.414294  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/auto-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-895134 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m21.156217054s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.16s)

TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-229658 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8028eb6d-e54b-4e4f-9bb8-21610223961c] Pending
E1018 13:31:47.334931  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [8028eb6d-e54b-4e4f-9bb8-21610223961c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8028eb6d-e54b-4e4f-9bb8-21610223961c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004068794s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-229658 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-229658 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-229658 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.263079098s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-229658 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/embed-certs/serial/Stop (11.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-229658 --alsologtostderr -v=3
E1018 13:32:05.446682  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:05.453021  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:05.464372  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:05.485745  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:05.527108  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:05.608459  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:05.770038  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:06.092280  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:06.734318  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:08.015669  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:08.551738  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-229658 --alsologtostderr -v=3: (11.637541336s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.64s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-229658 -n embed-certs-229658
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-229658 -n embed-certs-229658: exit status 7 (112.43916ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-229658 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (55.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-229658 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1018 13:32:09.794455  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/calico-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:10.577306  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:15.698652  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:22.013870  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:25.939991  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:46.421783  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:51.201308  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:52.905500  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/addons-481299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.234055  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.240488  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.252070  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.273572  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.315018  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.396524  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.558206  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:57.879965  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:58.521961  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:32:59.804227  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-229658 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (55.150523132s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-229658 -n embed-certs-229658
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.54s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-895134 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e87646a9-1855-40b5-bc08-7e8a72d2dbf0] Pending
E1018 13:33:02.365937  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [e87646a9-1855-40b5-bc08-7e8a72d2dbf0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e87646a9-1855-40b5-bc08-7e8a72d2dbf0] Running
E1018 13:33:07.487420  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:33:09.256320  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003891472s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-895134 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fjvnw" [01181caf-8d8a-484e-8c38-b12712c18ee6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003852587s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-895134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-895134 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fjvnw" [01181caf-8d8a-484e-8c38-b12712c18ee6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004093588s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-229658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-895134 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-895134 --alsologtostderr -v=3: (11.29359751s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.29s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-229658 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-229658 --alsologtostderr -v=1
E1018 13:33:17.729003  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-229658 -n embed-certs-229658
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-229658 -n embed-certs-229658: exit status 2 (346.06648ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-229658 -n embed-certs-229658
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-229658 -n embed-certs-229658: exit status 2 (323.395419ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-229658 --alsologtostderr -v=1
E1018 13:33:18.906290  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/custom-flannel-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-229658 -n embed-certs-229658
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-229658 -n embed-certs-229658
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134: exit status 7 (77.747177ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-895134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-895134 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-895134 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m0.481541304s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.87s)

TestStartStop/group/newest-cni/serial/FirstStart (50s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-979836 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1018 13:33:27.383706  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:33:38.210683  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:33:43.935258  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/enable-default-cni-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:33:48.014404  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.556280  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.562642  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.573997  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.595312  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.636655  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.718085  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:08.879692  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:09.201459  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:09.843301  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:11.125051  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-979836 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (50.000176911s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-979836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1018 13:34:13.686957  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-979836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.209847904s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-979836 --alsologtostderr -v=3
E1018 13:34:15.716153  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/false-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:18.808239  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:19.174054  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kubenet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-979836 --alsologtostderr -v=3: (11.244550249s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5527f" [635db3c9-8ca7-48cd-b09f-ac0ecf93731f] Running
E1018 13:34:24.690174  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/kindnet-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:24.847867  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/skaffold-559745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003246212s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-979836 -n newest-cni-979836
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-979836 -n newest-cni-979836: exit status 7 (83.869929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-979836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (21.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-979836 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1018 13:34:29.049653  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-979836 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (21.581421263s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-979836 -n newest-cni-979836
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5527f" [635db3c9-8ca7-48cd-b09f-ac0ecf93731f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004206403s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-895134 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-895134 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-895134 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-895134 --alsologtostderr -v=1: (1.243872963s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134: exit status 2 (538.465666ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134: exit status 2 (532.08702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-895134 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-895134 --alsologtostderr -v=1: (1.010563975s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-895134 -n default-k8s-diff-port-895134
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-979836 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-979836 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-979836 -n newest-cni-979836
E1018 13:34:49.305800  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/bridge-857514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:49.531458  837622 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-835760/.minikube/profiles/old-k8s-version-418330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-979836 -n newest-cni-979836: exit status 2 (337.417225ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-979836 -n newest-cni-979836
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-979836 -n newest-cni-979836: exit status 2 (337.924554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-979836 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-979836 -n newest-cni-979836
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-979836 -n newest-cni-979836
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                    

Test skip (26/347)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.67s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-419204 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-419204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-419204
--- SKIP: TestDownloadOnlyKic (0.67s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-857514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-857514

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /etc/resolv.conf:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-857514

>>> host: crictl pods:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: crictl containers:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> k8s: describe netcat deployment:
error: context "cilium-857514" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-857514" does not exist

>>> k8s: netcat logs:
error: context "cilium-857514" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-857514" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-857514" does not exist

>>> k8s: coredns logs:
error: context "cilium-857514" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-857514" does not exist

>>> k8s: api server logs:
error: context "cilium-857514" does not exist

>>> host: /etc/cni:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: ip a s:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: ip r s:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: iptables-save:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: iptables table nat:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-857514

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-857514

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-857514" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-857514" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-857514

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-857514

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-857514" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-857514" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-857514" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-857514" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-857514" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: kubelet daemon config:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> k8s: kubelet logs:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-857514

>>> host: docker daemon status:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: docker daemon config:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: docker system info:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: cri-docker daemon status:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: cri-docker daemon config:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: cri-dockerd version:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: containerd daemon status:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: containerd daemon config:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: containerd config dump:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: crio daemon status:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: crio daemon config:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: /etc/crio:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

>>> host: crio config:
* Profile "cilium-857514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-857514"

----------------------- debugLogs end: cilium-857514 [took: 5.356922232s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-857514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-857514
--- SKIP: TestNetworkPlugins/group/cilium (5.58s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-364917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-364917
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
