Test Report: Docker_Linux 21642

14b81faeac061460adc41f1c17794999a5c5cccd:2025-09-26:41636

Test fail (13/346)

TestAddons/serial/Volcano (208.92s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 14.592216ms
addons_test.go:876: volcano-admission stabilized in 14.63832ms
addons_test.go:868: volcano-scheduler stabilized in 14.784596ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-pthl8" [eabaa50d-a4ea-4187-9d96-e9e3a4e3ee87] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003208945s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-prpdk" [5ad75b93-0371-4858-8419-a2a9ba7bbeb7] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003699611s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-pgdlp" [7f3b0137-16de-4a4d-bc85-4f44317eddcd] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003414125s
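A minimal sketch of the wait pattern behind the three checks above: poll the pod list for a label selector until every match reports Running or the deadline passes. This assumes client-go; waitForPods and the kubeconfig path are illustrative, not the harness's actual helpers.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running,
// mirroring the "waiting 6m0s for pods matching ..." lines above.
func waitForPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Namespace, selector, and timeout match the first Volcano wait above.
	if err := waitForPods(client, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}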
addons_test.go:903: (dbg) Run:  kubectl --context addons-619347 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-619347 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-619347 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [ea13d899-24fa-4952-96e2-f96a6e3c7beb] Pending
helpers_test.go:352: "test-job-nginx-0" [ea13d899-24fa-4952-96e2-f96a6e3c7beb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:935: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
addons_test.go:935: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-09-26 22:34:40.011184642 +0000 UTC m=+349.967167579
addons_test.go:935: (dbg) Run:  kubectl --context addons-619347 describe po test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) kubectl --context addons-619347 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             addons-619347/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:31:41 +0000
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-eea5974b-c808-46fa-854a-fd48546dd832
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               10.244.0.28
IPs:
IP:           10.244.0.28
Controlled By:  Job/test-job
Containers:
nginx:
Container ID:  
Image:         nginx:latest
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
10m
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h897f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-h897f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From     Message
----     ------     ----                  ----     -------
Normal   Scheduled  2m59s                 volcano  Successfully assigned my-volcano/test-job-nginx-0 to addons-619347
Warning  Failed     2m59s                 kubelet  Failed to pull image "nginx:latest": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    89s (x4 over 2m59s)   kubelet  Pulling image "nginx:latest"
Warning  Failed     89s (x4 over 2m59s)   kubelet  Error: ErrImagePull
Warning  Failed     89s (x3 over 2m47s)   kubelet  Failed to pull image "nginx:latest": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    11s (x11 over 2m58s)  kubelet  Back-off pulling image "nginx:latest"
Warning  Failed     11s (x11 over 2m58s)  kubelet  Error: ImagePullBackOff
addons_test.go:935: (dbg) Run:  kubectl --context addons-619347 logs test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) Non-zero exit: kubectl --context addons-619347 logs test-job-nginx-0 -n my-volcano: exit status 1 (70.959813ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "test-job-nginx-0" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:935: kubectl --context addons-619347 logs test-job-nginx-0 -n my-volcano: exit status 1
addons_test.go:936: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
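The diagnostics above pin the failure on Docker Hub's unauthenticated pull rate limit (toomanyrequests), not on Volcano itself: the kubelet retried the nginx:latest pull four times and ended in ImagePullBackOff. One way a run like this could sidestep the registry is to load the image into the node from the host before creating the job. A sketch, assuming the image is already in the host's Docker cache; preloadImage is a hypothetical helper, though "minikube image load" is a real subcommand:

package main

import (
	"fmt"
	"os/exec"
)

// preloadImage copies a host-cached image into the cluster node so the
// kubelet never pulls from Docker Hub. Hypothetical helper; not what
// addons_test.go actually does.
func preloadImage(profile, image string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "load", image)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("image load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Profile and image names are taken from this run.
	if err := preloadImage("addons-619347", "nginx:latest"); err != nil {
		fmt.Println(err)
	}
}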
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-619347
helpers_test.go:243: (dbg) docker inspect addons-619347:

-- stdout --
	[
	    {
	        "Id": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	        "Created": "2025-09-26T22:29:24.504112175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1401920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:29:24.53667075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hosts",
	        "LogPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d-json.log",
	        "Name": "/addons-619347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-619347:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-619347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	                "LowerDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-619347",
	                "Source": "/var/lib/docker/volumes/addons-619347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-619347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-619347",
	                "name.minikube.sigs.k8s.io": "addons-619347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3015286d67af8b7391959f3121ca363feb45d14fa55ccdc7193de806e7fe6e96",
	            "SandboxKey": "/var/run/docker/netns/3015286d67af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-619347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:cd:cb:d7:a7:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22f06ef7f1b3d4919d623039fdb7eaef892f9c8c0a7074ff47e8c48934f6f117",
	                    "EndpointID": "4b693477b2120ec160d127bc2bc90fabb016ebf45c34df1cad9bd2399ffdc1cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-619347",
	                        "f0caa77a5878"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
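The inspect output above is the JSON that later steps mine for connection details; NetworkSettings.Ports, for instance, maps 22/tcp to 127.0.0.1:33881, the address the provisioner dials for SSH further down. A small decoding sketch (struct shapes follow the output shown; type names are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// binding matches the HostIp/HostPort pairs under NetworkSettings.Ports.
type binding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// entry picks out only the field this sketch cares about; docker inspect
// returns a JSON array of such objects.
type entry struct {
	NetworkSettings struct {
		Ports map[string][]binding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-619347").Output()
	if err != nil {
		panic(err)
	}
	var entries []entry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for port, bs := range entries[0].NetworkSettings.Ports {
		for _, b := range bs {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:33881
		}
	}
}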
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-619347 -n addons-619347
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 logs -n 25
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-036757 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-036757   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-036757                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-036757   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ start   │ -o=json --download-only -p download-only-040048 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-036757                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-036757   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ start   │ --download-only -p download-docker-193843 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ -p download-docker-193843                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ --download-only -p binary-mirror-237584 --alsologtostderr --binary-mirror http://127.0.0.1:35911 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-237584                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ disable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-619347 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:01
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:01.756585 1401287 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:01.756707 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756717 1401287 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:01.756724 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756944 1401287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:29:01.757503 1401287 out.go:368] Setting JSON to false
	I0926 22:29:01.758423 1401287 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15086,"bootTime":1758910656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:01.758529 1401287 start.go:140] virtualization: kvm guest
	I0926 22:29:01.760350 1401287 out.go:179] * [addons-619347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:01.761510 1401287 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:01.761513 1401287 notify.go:220] Checking for updates...
	I0926 22:29:01.763728 1401287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:01.765716 1401287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:01.766946 1401287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:29:01.767993 1401287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:01.768984 1401287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:01.770171 1401287 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:01.792688 1401287 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:01.792779 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.845164 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.835526355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.845273 1401287 docker.go:318] overlay module found
	I0926 22:29:01.847734 1401287 out.go:179] * Using the docker driver based on user configuration
	I0926 22:29:01.848892 1401287 start.go:304] selected driver: docker
	I0926 22:29:01.848910 1401287 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:01.848922 1401287 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:01.849577 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.899952 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.890671576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.900135 1401287 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:01.900371 1401287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:01.902046 1401287 out.go:179] * Using Docker driver with root privileges
	I0926 22:29:01.903097 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:01.903175 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:01.903186 1401287 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:01.903270 1401287 start.go:348] cluster config:
	{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:01.904858 1401287 out.go:179] * Starting "addons-619347" primary control-plane node in "addons-619347" cluster
	I0926 22:29:01.906044 1401287 cache.go:123] Beginning downloading kic base image for docker with docker
	I0926 22:29:01.907356 1401287 out.go:179] * Pulling base image v0.0.48 ...
	I0926 22:29:01.908297 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:01.908335 1401287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0926 22:29:01.908345 1401287 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:01.908416 1401287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:29:01.908443 1401287 preload.go:172] Found /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:01.908453 1401287 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0926 22:29:01.908843 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:01.908883 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json: {Name:mkc2865f84bd589b8eae2eb83eded5267684d61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:01.925224 1401287 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:29:01.925402 1401287 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:29:01.925420 1401287 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0926 22:29:01.925428 1401287 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0926 22:29:01.925435 1401287 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0926 22:29:01.925439 1401287 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0926 22:29:14.155592 1401287 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0926 22:29:14.155633 1401287 cache.go:232] Successfully downloaded all kic artifacts
	I0926 22:29:14.155712 1401287 start.go:360] acquireMachinesLock for addons-619347: {Name:mk16a13d35eefb90d37e67ab9d542372a6292c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:14.155829 1401287 start.go:364] duration metric: took 91.725µs to acquireMachinesLock for "addons-619347"
	I0926 22:29:14.155856 1401287 start.go:93] Provisioning new machine with config: &{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:14.155980 1401287 start.go:125] createHost starting for "" (driver="docker")
	I0926 22:29:14.157562 1401287 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0926 22:29:14.157823 1401287 start.go:159] libmachine.API.Create for "addons-619347" (driver="docker")
	I0926 22:29:14.157858 1401287 client.go:168] LocalClient.Create starting
	I0926 22:29:14.158021 1401287 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem
	I0926 22:29:14.205932 1401287 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem
	I0926 22:29:14.366294 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 22:29:14.383620 1401287 cli_runner.go:211] docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 22:29:14.383691 1401287 network_create.go:284] running [docker network inspect addons-619347] to gather additional debugging logs...
	I0926 22:29:14.383716 1401287 cli_runner.go:164] Run: docker network inspect addons-619347
	W0926 22:29:14.399817 1401287 cli_runner.go:211] docker network inspect addons-619347 returned with exit code 1
	I0926 22:29:14.399876 1401287 network_create.go:287] error running [docker network inspect addons-619347]: docker network inspect addons-619347: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-619347 not found
	I0926 22:29:14.399898 1401287 network_create.go:289] output of [docker network inspect addons-619347]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-619347 not found
	
	** /stderr **
	I0926 22:29:14.400043 1401287 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:14.417291 1401287 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ae9be0}
	I0926 22:29:14.417339 1401287 network_create.go:124] attempt to create docker network addons-619347 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0926 22:29:14.417382 1401287 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-619347 addons-619347
	I0926 22:29:14.473127 1401287 network_create.go:108] docker network addons-619347 192.168.49.0/24 created
	I0926 22:29:14.473163 1401287 kic.go:121] calculated static IP "192.168.49.2" for the "addons-619347" container
	I0926 22:29:14.473252 1401287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 22:29:14.489293 1401287 cli_runner.go:164] Run: docker volume create addons-619347 --label name.minikube.sigs.k8s.io=addons-619347 --label created_by.minikube.sigs.k8s.io=true
	I0926 22:29:14.506092 1401287 oci.go:103] Successfully created a docker volume addons-619347
	I0926 22:29:14.506161 1401287 cli_runner.go:164] Run: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 22:29:20.841341 1401287 cli_runner.go:217] Completed: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.335139978s)
	I0926 22:29:20.841369 1401287 oci.go:107] Successfully prepared a docker volume addons-619347
	I0926 22:29:20.841406 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:20.841430 1401287 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 22:29:20.841514 1401287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 22:29:24.436467 1401287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.594814262s)
	I0926 22:29:24.436527 1401287 kic.go:203] duration metric: took 3.595091279s to extract preloaded images to volume ...
	W0926 22:29:24.436629 1401287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 22:29:24.436675 1401287 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 22:29:24.436720 1401287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 22:29:24.488860 1401287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-619347 --name addons-619347 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-619347 --network addons-619347 --ip 192.168.49.2 --volume addons-619347:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 22:29:24.739034 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Running}}
	I0926 22:29:24.756901 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:24.774535 1401287 cli_runner.go:164] Run: docker exec addons-619347 stat /var/lib/dpkg/alternatives/iptables
	I0926 22:29:24.821732 1401287 oci.go:144] the created container "addons-619347" has a running status.
	I0926 22:29:24.821762 1401287 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa...
	I0926 22:29:25.058873 1401287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 22:29:25.084720 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.103222 1401287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 22:29:25.103256 1401287 kic_runner.go:114] Args: [docker exec --privileged addons-619347 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 22:29:25.152057 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.171032 1401287 machine.go:93] provisionDockerMachine start ...
	I0926 22:29:25.171165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.192356 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.192770 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.192789 1401287 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:29:25.329327 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.329360 1401287 ubuntu.go:182] provisioning hostname "addons-619347"
	I0926 22:29:25.329440 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.347623 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.347852 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.347866 1401287 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-619347 && echo "addons-619347" | sudo tee /etc/hostname
	I0926 22:29:25.495671 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.495764 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.513361 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.513676 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.513706 1401287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-619347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-619347/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-619347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.648127 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.648158 1401287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-1396392/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-1396392/.minikube}
	I0926 22:29:25.648181 1401287 ubuntu.go:190] setting up certificates
	I0926 22:29:25.648194 1401287 provision.go:84] configureAuth start
	I0926 22:29:25.648256 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:25.665581 1401287 provision.go:143] copyHostCerts
	I0926 22:29:25.665655 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.665964 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.666216 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.666332 1401287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem org=jenkins.addons-619347 san=[127.0.0.1 192.168.49.2 addons-619347 localhost minikube]
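minikube generates this server certificate in Go; an openssl equivalent that signs against the same CA and embeds the same SANs would look roughly like this (a sketch with hypothetical working-directory file names, assuming a bash shell for the process substitution):

    # CSR for the machine server cert, then sign it with the minikube CA,
    # embedding the SANs listed in the log line above.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.addons-619347" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-619347,DNS:localhost,DNS:minikube')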
	I0926 22:29:26.345521 1401287 provision.go:177] copyRemoteCerts
	I0926 22:29:26.345589 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:26.345626 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.363376 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:26.461182 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:26.487057 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:26.511222 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 22:29:26.535844 1401287 provision.go:87] duration metric: took 887.635192ms to configureAuth
	I0926 22:29:26.535878 1401287 ubuntu.go:206] setting minikube options for container-runtime
	I0926 22:29:26.536095 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:26.536165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.554135 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.554419 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.554438 1401287 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 22:29:26.690395 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0926 22:29:26.690420 1401287 ubuntu.go:71] root file system type: overlay
	I0926 22:29:26.690565 1401287 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 22:29:26.690630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.708389 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.708653 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.708753 1401287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 22:29:26.857459 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 22:29:26.857566 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.875261 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.875543 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.875567 1401287 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 22:29:27.972927 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-26 22:29:26.855075288 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
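The diff-or-replace one-liner above is an idempotent install: diff exits non-zero only when the staged unit differs from (or is missing relative to) the live one, so the move/daemon-reload/restart branch runs just on change. Unrolled for readability (a sketch):

    # Install the staged unit only if it changed, then restart docker.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl enable docker && sudo systemctl restart docker
    fi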
	
	I0926 22:29:27.972953 1401287 machine.go:96] duration metric: took 2.801887579s to provisionDockerMachine
	I0926 22:29:27.972966 1401287 client.go:171] duration metric: took 13.815098068s to LocalClient.Create
	I0926 22:29:27.972989 1401287 start.go:167] duration metric: took 13.815166582s to libmachine.API.Create "addons-619347"
	I0926 22:29:27.972999 1401287 start.go:293] postStartSetup for "addons-619347" (driver="docker")
	I0926 22:29:27.973014 1401287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:27.973075 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:27.973123 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:27.990436 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.088898 1401287 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:28.092357 1401287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 22:29:28.092381 1401287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 22:29:28.092389 1401287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 22:29:28.092397 1401287 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 22:29:28.092411 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/addons for local assets ...
	I0926 22:29:28.092496 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/files for local assets ...
	I0926 22:29:28.092533 1401287 start.go:296] duration metric: took 119.526658ms for postStartSetup
	I0926 22:29:28.092888 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.110347 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:28.110666 1401287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:29:28.110720 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.127963 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.219507 1401287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 22:29:28.223820 1401287 start.go:128] duration metric: took 14.067824148s to createHost
	I0926 22:29:28.223850 1401287 start.go:83] releasing machines lock for "addons-619347", held for 14.068007272s
	I0926 22:29:28.223922 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.240598 1401287 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:28.240633 1401287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:28.240652 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.240703 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.257372 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.258797 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.423810 1401287 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:28.428533 1401287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 22:29:28.433038 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 22:29:28.461936 1401287 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 22:29:28.462028 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:28.488392 1401287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
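Note the bridge/podman configs are parked with a .mk_disabled suffix rather than deleted, so they can be restored later if needed (a sketch):

    # Undo the disabling of one of the parked CNI configs named above.
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist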
	I0926 22:29:28.488420 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.488455 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.488593 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.505081 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 22:29:28.516249 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 22:29:28.526291 1401287 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 22:29:28.526353 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 22:29:28.536220 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.546282 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 22:29:28.556108 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.565920 1401287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:28.575000 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 22:29:28.584684 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 22:29:28.594441 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
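The sed chain above rewrites /etc/containerd/config.toml in place: systemd cgroups on, the pause image pinned, unprivileged ports allowed, and the CNI conf dir pinned. A quick spot-check of the net effect (a sketch):

    # Expect SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1",
    # and enable_unprivileged_ports = true after the edits above.
    sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml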
	I0926 22:29:28.604436 1401287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:28.612926 1401287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:29:28.621307 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:28.686706 1401287 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 22:29:28.765771 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.765825 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.765881 1401287 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 22:29:28.778235 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.789193 1401287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:28.806369 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.817718 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 22:29:28.828841 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.845391 1401287 ssh_runner.go:195] Run: which cri-dockerd
	I0926 22:29:28.848841 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 22:29:28.859051 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0926 22:29:28.876661 1401287 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 22:29:28.939711 1401287 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 22:29:29.006868 1401287 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0926 22:29:29.007006 1401287 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0926 22:29:29.025882 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0926 22:29:29.037344 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:29.102031 1401287 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 22:29:29.866941 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:29.878676 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 22:29:29.890349 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:29.901859 1401287 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 22:29:29.971712 1401287 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 22:29:30.041653 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.108440 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 22:29:30.127589 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0926 22:29:30.138450 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.204543 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 22:29:30.280240 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:30.292074 1401287 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 22:29:30.292147 1401287 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 22:29:30.295851 1401287 start.go:563] Will wait 60s for crictl version
	I0926 22:29:30.295920 1401287 ssh_runner.go:195] Run: which crictl
	I0926 22:29:30.299332 1401287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:30.334344 1401287 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0926 22:29:30.334407 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.359394 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.385840 1401287 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0926 22:29:30.385911 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:30.402657 1401287 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:30.406689 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
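The /etc/hosts rewrite above is the usual atomic-update idiom: filter out any stale entry, append the fresh one into a temp file, and copy it into place in a single sudo step. Unrolled (a sketch; the $'\t' quoting assumes bash):

    # Refresh the host.minikube.internal entry without duplicating it.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts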
	I0926 22:29:30.418124 1401287 kubeadm.go:883] updating cluster {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:30.418244 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:30.418289 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.437981 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.438007 1401287 docker.go:621] Images already preloaded, skipping extraction
	I0926 22:29:30.438061 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.457379 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.457402 1401287 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:30.457415 1401287 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0926 22:29:30.457550 1401287 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-619347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
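The ExecStart override above lands in a drop-in (10-kubeadm.conf, written to /etc/systemd/system/kubelet.service.d/ later in this log), which systemd merges with the base kubelet unit. To see the effective result (a sketch):

    # Show the base unit plus all drop-ins as systemd will run them.
    systemctl cat kubelet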
	I0926 22:29:30.457608 1401287 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 22:29:30.507568 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:30.507618 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:30.507640 1401287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:30.507666 1401287 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-619347 NodeName:addons-619347 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:30.507817 1401287 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-619347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
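Before committing a generated config like the one above, kubeadm can exercise it without mutating the node (a sketch, using the staging path from this log):

    # Dry-run: render manifests and run checks, but write nothing permanent.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run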
	
	I0926 22:29:30.507878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:30.517618 1401287 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:30.517680 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:30.526766 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:30.544641 1401287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:30.561976 1401287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0926 22:29:30.579430 1401287 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:30.582806 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:30.593536 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.659215 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:30.680701 1401287 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347 for IP: 192.168.49.2
	I0926 22:29:30.680722 1401287 certs.go:195] generating shared ca certs ...
	I0926 22:29:30.680743 1401287 certs.go:227] acquiring lock for ca certs: {Name:mk6c7838cc2dce82903d545772166c35f6a8ea14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.680859 1401287 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key
	I0926 22:29:30.837572 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt ...
	I0926 22:29:30.837605 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt: {Name:mka8a7fba6c323e3efb5c337a110d874f4a069f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837797 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key ...
	I0926 22:29:30.837813 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key: {Name:mk5241bded4d58e8d730b5c39e3cb6b761b06b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837926 1401287 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key
	I0926 22:29:31.379026 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt ...
	I0926 22:29:31.379062 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt: {Name:mk0b26827e7effdc6e0cb418dab9aa237c23935e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379267 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key ...
	I0926 22:29:31.379283 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key: {Name:mkc17ee61ac662bf18733fd6087e23ac2b546ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379447 1401287 certs.go:257] generating profile certs ...
	I0926 22:29:31.379550 1401287 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key
	I0926 22:29:31.379571 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt with IP's: []
	I0926 22:29:31.863291 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt ...
	I0926 22:29:31.863331 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: {Name:mk25ddefd62aaf8d3e2f6d1fd2d519d1c2b1bea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863552 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key ...
	I0926 22:29:31.863571 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key: {Name:mk8cc05aa8f2753617dfe3d2ae365690c5c6ce86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863711 1401287 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15
	I0926 22:29:31.863742 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0926 22:29:32.476987 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 ...
	I0926 22:29:32.477026 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15: {Name:mkd972c04e4a2418d910fa6a476af654883d90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477231 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 ...
	I0926 22:29:32.477251 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15: {Name:mk6e7ebd8b361ff43396ae1d43e26cc4b3fca9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477363 1401287 certs.go:382] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt
	I0926 22:29:32.477503 1401287 certs.go:386] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key
	I0926 22:29:32.477596 1401287 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key
	I0926 22:29:32.477626 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt with IP's: []
	I0926 22:29:32.537971 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt ...
	I0926 22:29:32.538009 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt: {Name:mkfbd9d4d456b434b04760e6c3778ba177b5caa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538198 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key ...
	I0926 22:29:32.538217 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key: {Name:mkdbd77fea74f3adf740a694b7d5ff5142acf56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538432 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 22:29:32.538493 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:32.538542 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:32.538584 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:32.539249 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:32.564650 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 22:29:32.589199 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:32.612819 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:32.636809 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:32.660922 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:32.684674 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:32.708845 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:29:32.732866 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:32.759367 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:32.777459 1401287 ssh_runner.go:195] Run: openssl version
	I0926 22:29:32.783004 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:32.794673 1401287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798422 1401287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798497 1401287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.805099 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
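The hash/symlink pair above follows OpenSSL's hashed-directory convention: tools locate a CA in /etc/ssl/certs by a file named <subject-hash>.0. Reproducing it by hand (a sketch):

    # -hash prints the subject hash (b5213941 for this CA), which names the link.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem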
	I0926 22:29:32.814605 1401287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:32.817944 1401287 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:32.818016 1401287 kubeadm.go:400] StartCluster: {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:32.818116 1401287 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 22:29:32.836878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:32.846020 1401287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:32.855171 1401287 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 22:29:32.855233 1401287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:32.863903 1401287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:32.863919 1401287 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:32.863955 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:32.872442 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:32.872518 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:32.880882 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:32.889348 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:32.889394 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:32.897735 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.906508 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:32.906558 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.915447 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:32.924534 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:32.924590 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:32.933327 1401287 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 22:29:32.971243 1401287 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:32.971298 1401287 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:33.008888 1401287 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 22:29:33.009014 1401287 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 22:29:33.009067 1401287 kubeadm.go:318] OS: Linux
	I0926 22:29:33.009160 1401287 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 22:29:33.009217 1401287 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 22:29:33.009313 1401287 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 22:29:33.009388 1401287 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 22:29:33.009472 1401287 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 22:29:33.009577 1401287 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 22:29:33.009649 1401287 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 22:29:33.009739 1401287 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 22:29:33.064493 1401287 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:33.064612 1401287 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:33.064736 1401287 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:33.076202 1401287 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:33.078537 1401287 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:33.078633 1401287 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:33.078712 1401287 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:33.613982 1401287 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:34.132193 1401287 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:34.241294 1401287 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:34.638661 1401287 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:34.928444 1401287 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:34.928596 1401287 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.122701 1401287 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:35.122888 1401287 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.275604 1401287 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:35.549799 1401287 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:35.689108 1401287 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:35.689184 1401287 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:35.894121 1401287 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:36.122749 1401287 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:36.401681 1401287 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:36.449466 1401287 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:36.577737 1401287 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:36.578213 1401287 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:36.581892 1401287 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:36.583521 1401287 out.go:252]   - Booting up control plane ...
	I0926 22:29:36.583635 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:36.583735 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:36.584452 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:36.594025 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:36.594112 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:36.599591 1401287 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:36.599832 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:36.599913 1401287 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:36.682320 1401287 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:36.682523 1401287 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:37.683335 1401287 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001189529s
	I0926 22:29:37.687852 1401287 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:37.687994 1401287 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0926 22:29:37.688138 1401287 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:37.688267 1401287 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:38.693325 1401287 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005328653s
	I0926 22:29:39.818196 1401287 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.130304657s
	I0926 22:29:41.690178 1401287 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002189462s
	I0926 22:29:41.702527 1401287 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:41.711408 1401287 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:41.720193 1401287 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:41.720435 1401287 kubeadm.go:318] [mark-control-plane] Marking the node addons-619347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:41.727838 1401287 kubeadm.go:318] [bootstrap-token] Using token: ydwgpt.re3mhs2qr7yfu0od
	I0926 22:29:41.729412 1401287 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:41.729554 1401287 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:41.732328 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:41.737352 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:41.740726 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:41.743207 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:41.745363 1401287 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:42.096302 1401287 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:42.513166 1401287 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:43.094717 1401287 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:43.095522 1401287 kubeadm.go:318] 
	I0926 22:29:43.095627 1401287 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:43.095642 1401287 kubeadm.go:318] 
	I0926 22:29:43.095755 1401287 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:43.095774 1401287 kubeadm.go:318] 
	I0926 22:29:43.095814 1401287 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:43.095897 1401287 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:43.095977 1401287 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:43.095986 1401287 kubeadm.go:318] 
	I0926 22:29:43.096062 1401287 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:43.096071 1401287 kubeadm.go:318] 
	I0926 22:29:43.096135 1401287 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:43.096145 1401287 kubeadm.go:318] 
	I0926 22:29:43.096220 1401287 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:43.096324 1401287 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:43.096430 1401287 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:43.096455 1401287 kubeadm.go:318] 
	I0926 22:29:43.096638 1401287 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:43.096786 1401287 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:43.096798 1401287 kubeadm.go:318] 
	I0926 22:29:43.096919 1401287 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097088 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 \
	I0926 22:29:43.097115 1401287 kubeadm.go:318] 	--control-plane 
	I0926 22:29:43.097122 1401287 kubeadm.go:318] 
	I0926 22:29:43.097214 1401287 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:43.097228 1401287 kubeadm.go:318] 
	I0926 22:29:43.097348 1401287 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097470 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 
	I0926 22:29:43.099587 1401287 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 22:29:43.099739 1401287 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
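For reference, the sha256 value in the join commands above is a hash of the cluster CA public key, not a secret. Assuming shell access to the control-plane node and the default kubeadm PKI path, it can be recomputed with the openssl pipeline documented for kubeadm:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'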
	I0926 22:29:43.099768 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:43.099788 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:43.101355 1401287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:43.102553 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:43.112120 1401287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
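The conflist above is copied from memory, so its 496 bytes never appear in the log. Since this run uses the docker driver with a node container named addons-619347, one way to inspect what was actually written is:

    docker exec addons-619347 cat /etc/cni/net.d/1-k8s.conflist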
	I0926 22:29:43.130674 1401287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:43.130768 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.130767 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-619347 minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-619347 minikube.k8s.io/primary=true
	I0926 22:29:43.138720 1401287 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:43.217942 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.718375 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.218391 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.718337 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.219035 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.719000 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.218689 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.718531 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.218333 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.718316 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.783783 1401287 kubeadm.go:1113] duration metric: took 4.653074895s to wait for elevateKubeSystemPrivileges
	I0926 22:29:47.783815 1401287 kubeadm.go:402] duration metric: took 14.965805729s to StartCluster
	I0926 22:29:47.783835 1401287 settings.go:142] acquiring lock: {Name:mk19bb20e8e2719c9f4ae7071ba1f293bea0c47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.783943 1401287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:47.784300 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/kubeconfig: {Name:mk53eccd4814679d9dd1f60d4b668d1b7f9967e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.784499 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:47.784532 1401287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:47.784609 1401287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
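This toEnable map is the test harness's addon selection; outside the harness the same toggles correspond to the minikube addons CLI. A hedged equivalent for a single entry, using this run's binary and profile name, would be:

    out/minikube-linux-amd64 addons enable volcano -p addons-619347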
	I0926 22:29:47.784681 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784735 1401287 addons.go:69] Setting registry=true in profile "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784755 1401287 addons.go:69] Setting storage-provisioner=true in profile "addons-619347"
	I0926 22:29:47.784760 1401287 addons.go:238] Setting addon registry=true in "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting registry-creds=true in profile "addons-619347"
	I0926 22:29:47.784770 1401287 addons.go:238] Setting addon storage-provisioner=true in "addons-619347"
	I0926 22:29:47.784775 1401287 addons.go:238] Setting addon registry-creds=true in "addons-619347"
	I0926 22:29:47.784785 1401287 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-619347"
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784811 1401287 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784804 1401287 addons.go:69] Setting inspektor-gadget=true in profile "addons-619347"
	I0926 22:29:47.784822 1401287 addons.go:69] Setting volumesnapshots=true in profile "addons-619347"
	I0926 22:29:47.784827 1401287 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-619347"
	I0926 22:29:47.784832 1401287 addons.go:238] Setting addon inspektor-gadget=true in "addons-619347"
	I0926 22:29:47.784833 1401287 addons.go:238] Setting addon volumesnapshots=true in "addons-619347"
	I0926 22:29:47.784844 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784849 1401287 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:47.784851 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784856 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784879 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784951 1401287 addons.go:69] Setting ingress-dns=true in profile "addons-619347"
	I0926 22:29:47.784970 1401287 addons.go:69] Setting default-storageclass=true in profile "addons-619347"
	I0926 22:29:47.784958 1401287 addons.go:69] Setting gcp-auth=true in profile "addons-619347"
	I0926 22:29:47.784988 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-619347"
	I0926 22:29:47.784817 1401287 addons.go:69] Setting volcano=true in profile "addons-619347"
	I0926 22:29:47.785003 1401287 addons.go:238] Setting addon volcano=true in "addons-619347"
	I0926 22:29:47.785032 1401287 addons.go:69] Setting cloud-spanner=true in profile "addons-619347"
	I0926 22:29:47.785040 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785045 1401287 addons.go:238] Setting addon cloud-spanner=true in "addons-619347"
	I0926 22:29:47.785065 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785262 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784800 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785379 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784973 1401287 addons.go:238] Setting addon ingress-dns=true in "addons-619347"
	I0926 22:29:47.785498 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785518 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785535 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785723 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785798 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785980 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784992 1401287 mustload.go:65] Loading cluster: addons-619347
	I0926 22:29:47.784762 1401287 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-619347"
	I0926 22:29:47.787331 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784792 1401287 addons.go:69] Setting metrics-server=true in profile "addons-619347"
	I0926 22:29:47.784734 1401287 addons.go:69] Setting yakd=true in profile "addons-619347"
	I0926 22:29:47.787078 1401287 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787824 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.788010 1401287 addons.go:238] Setting addon metrics-server=true in "addons-619347"
	I0926 22:29:47.788028 1401287 addons.go:238] Setting addon yakd=true in "addons-619347"
	I0926 22:29:47.788047 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.788063 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.789412 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787118 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784734 1401287 addons.go:69] Setting ingress=true in profile "addons-619347"
	I0926 22:29:47.789936 1401287 addons.go:238] Setting addon ingress=true in "addons-619347"
	I0926 22:29:47.789980 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784814 1401287 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-619347"
	I0926 22:29:47.790231 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-619347"
	I0926 22:29:47.790451 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:47.793232 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.793847 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.802421 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.803014 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.835418 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0926 22:29:47.836021 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:47.839393 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:47.839421 1401287 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:47.840142 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.845675 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0926 22:29:47.849257 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0926 22:29:47.856053 1401287 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:47.858820 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0926 22:29:47.856545 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:47.858894 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.860040 1401287 addons.go:238] Setting addon default-storageclass=true in "addons-619347"
	I0926 22:29:47.860081 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.860516 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:47.860534 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:47.860630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.866839 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.873854 1401287 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:47.875341 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:47.875365 1401287 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:47.875428 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.882655 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:47.882749 1401287 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:47.884700 1401287 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:47.885073 1401287 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:47.885418 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:47.885504 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.884703 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:47.887232 1401287 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:47.887315 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:47.887396 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.887247 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:47.889515 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:47.892008 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:47.893405 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:47.895131 1401287 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:47.896348 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:47.896370 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:47.896434 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.897311 1401287 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-619347"
	I0926 22:29:47.897358 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.898142 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.899126 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:47.900143 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:47.902104 1401287 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:47.902740 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:47.902755 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:47.902813 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.903595 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:47.903615 1401287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:47.903685 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.911178 1401287 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:47.912616 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:47.912637 1401287 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:47.912867 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.916927 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:47.918186 1401287 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:47.919909 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.920091 1401287 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:47.920106 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:47.920166 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.921441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.922745 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.923875 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:47.923890 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:47.923943 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.926937 1401287 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:47.927973 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:47.927993 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:47.928052 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.940536 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.942062 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0926 22:29:47.945122 1401287 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:47.946248 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:47.946273 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:47.946337 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.951570 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.958865 1401287 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:47.959859 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.960450 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.961755 1401287 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:47.965573 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:47.965594 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:47.965659 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.966411 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976561 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976622 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977107 1401287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:47.977106 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977119 1401287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:47.977177 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.980224 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.984609 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.989681 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.990796 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:47.997697 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.997795 1401287 retry.go:31] will retry after 178.321817ms: ssh: handshake failed: EOF
	W0926 22:29:47.999217 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.999256 1401287 retry.go:31] will retry after 245.552991ms: ssh: handshake failed: EOF
	I0926 22:29:48.009280 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.011073 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.018912 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:48.019331 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:29:48.022191 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.027290 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:48.029295 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:48.029402 1401287 retry.go:31] will retry after 284.652213ms: ssh: handshake failed: EOF
	I0926 22:29:48.076445 1401287 node_ready.go:35] waiting up to 6m0s for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081001 1401287 node_ready.go:49] node "addons-619347" is "Ready"
	I0926 22:29:48.081030 1401287 node_ready.go:38] duration metric: took 4.536047ms for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081059 1401287 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:48.081111 1401287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:29:48.140834 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.140859 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:48.162194 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:48.165548 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:48.168900 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:48.182428 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.188630 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:48.188700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:48.201257 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:48.201282 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:48.206272 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:48.206297 1401287 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:48.207662 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:48.207682 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:48.218223 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:48.220995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:48.226298 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:48.226321 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:48.226742 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:48.226761 1401287 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:48.262874 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:48.262908 1401287 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:48.275319 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:48.275353 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:48.291538 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:48.291571 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:48.310099 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.310124 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:48.326030 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:48.326056 1401287 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:48.326064 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:48.326081 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:48.368923 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.368970 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:48.377708 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:48.377782 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:48.395824 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.409558 1401287 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
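The injected record lands in the Corefile key of the coredns ConfigMap, so it can be confirmed after the fact with a jsonpath query (context name taken from this run):

    kubectl --context addons-619347 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'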
	I0926 22:29:48.410568 1401287 api_server.go:72] duration metric: took 626.001878ms to wait for apiserver process to appear ...
	I0926 22:29:48.410598 1401287 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:29:48.410621 1401287 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0926 22:29:48.424990 1401287 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0926 22:29:48.427236 1401287 api_server.go:141] control plane version: v1.34.0
	I0926 22:29:48.427333 1401287 api_server.go:131] duration metric: took 16.7257ms to wait for apiserver health ...
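The healthz probe above returns the literal body "ok" on success, as logged. Assuming anonymous auth is at its default (the system:public-info-viewer ClusterRole permits unauthenticated GETs on /healthz), the same check can be reproduced by hand:

    curl -k https://192.168.49.2:8443/healthz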
	I0926 22:29:48.427359 1401287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:29:48.434147 1401287 system_pods.go:59] 7 kube-system pods found
	I0926 22:29:48.434185 1401287 system_pods.go:61] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434195 1401287 system_pods.go:61] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434206 1401287 system_pods.go:61] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.434221 1401287 system_pods.go:61] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.434230 1401287 system_pods.go:61] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.434237 1401287 system_pods.go:61] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.434245 1401287 system_pods.go:61] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.434254 1401287 system_pods.go:74] duration metric: took 6.877162ms to wait for pod list to return data ...
	I0926 22:29:48.434265 1401287 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:29:48.437910 1401287 default_sa.go:45] found service account: "default"
	I0926 22:29:48.437986 1401287 default_sa.go:55] duration metric: took 3.713655ms for default service account to be created ...
	I0926 22:29:48.438009 1401287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:29:48.449749 1401287 system_pods.go:86] 7 kube-system pods found
	I0926 22:29:48.449859 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449883 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449933 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.449956 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.449992 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.450028 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.450047 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.450113 1401287 retry.go:31] will retry after 220.911414ms: missing components: kube-dns, kube-proxy
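This retry loop polls the kube-system pod list until kube-dns and kube-proxy report Running. A rough manual equivalent, assuming the standard k8s-app labels that kubeadm applies to CoreDNS and kube-proxy, is:

    kubectl --context addons-619347 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
    kubectl --context addons-619347 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=360s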
	I0926 22:29:48.454420 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.454446 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:48.467995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.486003 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:48.486043 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:48.505966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:48.506005 1401287 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:48.519158 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:48.533016 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.564879 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:48.613388 1401287 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.613410 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:48.638555 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:48.678611 1401287 system_pods.go:86] 8 kube-system pods found
	I0926 22:29:48.678647 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending
	I0926 22:29:48.678660 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678669 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678691 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.678698 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:48.678709 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.678717 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.678724 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.678749 1401287 retry.go:31] will retry after 325.08055ms: missing components: kube-dns, kube-proxy
	I0926 22:29:48.694878 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:48.694910 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:48.717411 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.874966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:48.875006 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:48.915620 1401287 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-619347" context rescaled to 1 replicas
	I0926 22:29:48.947182 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:48.947278 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:49.013309 1401287 system_pods.go:86] 9 kube-system pods found
	I0926 22:29:49.013412 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.013424 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013461 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013471 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.013525 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.013537 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.013546 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.013553 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.013560 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.013636 1401287 retry.go:31] will retry after 486.746944ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.102910 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:49.102950 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:49.259460 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:49.259504 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:49.377226 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:49.377250 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:49.493928 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:49.493968 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:49.517924 1401287 system_pods.go:86] 14 kube-system pods found
	I0926 22:29:49.517990 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.518004 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518013 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518022 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.518044 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.518055 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.518063 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:49.518072 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.518081 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.518100 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.518123 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:49.518143 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:49.518154 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:49.518165 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending
	I0926 22:29:49.518211 1401287 retry.go:31] will retry after 599.651697ms: missing components: kube-dns, kube-proxy
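
The retry.go lines above show the component wait loop: re-list the kube-system pods, report what is still missing (here kube-dns and kube-proxy), sleep, and try again with a slightly longer interval. A minimal sketch of that shape, with listMissing as a stand-in for the real API-server check and the growth step an assumption read off the ~487ms to ~600ms delays above:

    // Sketch of the poll-and-retry loop behind retry.go:31 above.
    package main

    import (
        "fmt"
        "log"
        "strings"
        "time"
    )

    // listMissing would re-list kube-system pods and return the required
    // components that are not yet Running. Stubbed out in this sketch.
    func listMissing(required []string) []string { return nil }

    func waitForComponents(required []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            missing := listMissing(required)
            if len(missing) == 0 {
                return nil
            }
            log.Printf("will retry after %v: missing components: %s",
                delay, strings.Join(missing, ", "))
            time.Sleep(delay)
            delay += delay / 4 // modest growth between attempts
        }
        return fmt.Errorf("timed out waiting for %v", required)
    }

    func main() {
        if err := waitForComponents([]string{"kube-dns", "kube-proxy"}, time.Minute); err != nil {
            log.Fatal(err)
        }
    }
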
	I0926 22:29:49.625802 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:50.130675 1401287 system_pods.go:86] 15 kube-system pods found
	I0926 22:29:50.130828 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:50.130842 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130854 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130861 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running
	I0926 22:29:50.130866 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:50.130875 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:50.130885 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:50.130892 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Running
	I0926 22:29:50.130900 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:50.130908 1401287 system_pods.go:89] "metrics-server-85b7d694d7-mjlqr" [18663e65-efc9-4e15-8dad-c4e23a7f7f18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:29:50.130924 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:50.130932 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:50.130942 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:50.130951 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:50.130958 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:29:50.130969 1401287 system_pods.go:126] duration metric: took 1.692943423s to wait for k8s-apps to be running ...
	I0926 22:29:50.130981 1401287 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:29:50.131036 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:29:50.228682 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.066443039s)
	I0926 22:29:50.228730 1401287 addons.go:479] Verifying addon ingress=true in "addons-619347"
	I0926 22:29:50.229183 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.06360117s)
	I0926 22:29:50.229277 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.06027927s)
	I0926 22:29:50.229386 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.046934043s)
	W0926 22:29:50.229417 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:50.229439 1401287 retry.go:31] will retry after 244.753675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
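
The validation error above is client-side: kubectl found a YAML document inside ig-crd.yaml with no apiVersion or kind set, so the file is rejected before it ever reaches the API server, and each retry below fails identically. A rough offline check for this condition, assuming sigs.k8s.io/yaml and a hypothetical local copy of the manifest; splitting on "---" is approximate (a real tool would use a proper YAML document decoder):

    // Report YAML documents missing apiVersion or kind, the exact condition
    // kubectl's client-side validation rejects above.
    package main

    import (
        "fmt"
        "os"
        "strings"

        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy
        if err != nil {
            panic(err)
        }
        for i, doc := range strings.Split(string(data), "\n---") {
            doc = strings.TrimSpace(doc)
            if doc == "" {
                continue // empty documents between separators are skipped
            }
            var obj map[string]interface{}
            if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
                fmt.Printf("doc %d: %v\n", i, err)
                continue
            }
            if obj["apiVersion"] == nil || obj["kind"] == nil {
                fmt.Printf("doc %d: apiVersion or kind not set\n", i)
            }
        }
    }
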
	I0926 22:29:50.229506 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.040735105s)
	I0926 22:29:50.229590 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040703194s)
	I0926 22:29:50.229630 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.011384775s)
	I0926 22:29:50.229674 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00860092s)
	I0926 22:29:50.229967 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.834111385s)
	I0926 22:29:50.229990 1401287 addons.go:479] Verifying addon registry=true in "addons-619347"
	I0926 22:29:50.230454 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.762415616s)
	I0926 22:29:50.230635 1401287 addons.go:479] Verifying addon metrics-server=true in "addons-619347"
	I0926 22:29:50.230518 1401287 out.go:179] * Verifying ingress addon...
	I0926 22:29:50.233574 1401287 out.go:179] * Verifying registry addon...
	I0926 22:29:50.234496 1401287 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:29:50.236422 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:29:50.239932 1401287 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:29:50.239997 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.242126 1401287 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:29:50.242195 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
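
The kapi.go lines above ("Waiting for pod with label ... in ns ...") poll the API server for pods matching a label selector until one reports Running. A sketch of the same wait using client-go; it checks only the pod phase, whereas the real kapi code also tracks readiness. The kubeconfig path and selector are taken from this log; the interval and timeout are assumptions:

    // Label-selector pod wait, in the style of kapi.go:75/96 above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sel := "kubernetes.io/minikube-addons=registry"
        err = wait.PollImmediate(3*time.Second, 5*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, nil // tolerate transient API errors; keep polling
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return true, nil
                }
            }
            return false, nil
        })
        if err != nil {
            fmt.Println("wait failed:", err)
        }
    }
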
	I0926 22:29:50.474912 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:50.747610 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.749841 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.178335 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.659134928s)
	I0926 22:29:51.178429 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.645380917s)
	I0926 22:29:51.178600 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.613538879s)
	I0926 22:29:51.178880 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.540232302s)
	I0926 22:29:51.179022 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461568485s)
	W0926 22:29:51.179054 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:29:51.179074 1401287 retry.go:31] will retry after 372.721698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
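
"no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" is the usual CRD/CR ordering race: the snapshot CRDs created in the same apply are not yet established when the VolumeSnapshotClass custom resource is submitted, which is why the forced re-apply at 22:29:51 below goes through. A sketch that makes the ordering explicit instead of relying on retries; this is not minikube's code, and the file paths are the ones from this log:

    // Apply CRDs, wait for them to be Established, then apply the CRs.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        steps := [][]string{
            {"kubectl", "apply", "-f",
                "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
            {"kubectl", "wait", "--for=condition=Established",
                "crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"},
            {"kubectl", "apply", "-f",
                "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
        }
        for _, s := range steps {
            out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
            log.Printf("%v\n%s", s, out)
            if err != nil {
                log.Fatal(err) // stop at the first failing step
            }
        }
    }
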
	I0926 22:29:51.180773 1401287 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-619347 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:29:51.223913 1401287 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.092854415s)
	I0926 22:29:51.223952 1401287 system_svc.go:56] duration metric: took 1.092967022s WaitForService to wait for kubelet
	I0926 22:29:51.223963 1401287 kubeadm.go:586] duration metric: took 3.439402099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:51.223986 1401287 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:29:51.224342 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.598487819s)
	I0926 22:29:51.224378 1401287 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:51.225939 1401287 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:29:51.228192 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:29:51.229798 1401287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 22:29:51.229833 1401287 node_conditions.go:123] node cpu capacity is 8
	I0926 22:29:51.229856 1401287 node_conditions.go:105] duration metric: took 5.863751ms to run NodePressure ...
	I0926 22:29:51.229880 1401287 start.go:241] waiting for startup goroutines ...
	I0926 22:29:51.234026 1401287 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:29:51.234047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.241936 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.243854 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.552700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:51.709711 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.234742831s)
	W0926 22:29:51.709760 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:51.709786 1401287 retry.go:31] will retry after 268.370333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:51.732520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.738383 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.739361 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.978851 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:52.231665 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.237879 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.238844 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:52.731592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.738117 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.739055 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.232517 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.237333 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.239471 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.731711 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.737791 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.738851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.244329 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.691529274s)
	I0926 22:29:54.244428 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.26554658s)
	W0926 22:29:54.244461 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.244491 1401287 retry.go:31] will retry after 392.451192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.303455 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.303472 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.303697 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.637695 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.732408 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.737348 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.738840 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:55.209616 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.209647 1401287 retry.go:31] will retry after 748.885115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.232030 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.238153 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.239111 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.331196 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:55.331261 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.348751 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:55.457803 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:29:55.479373 1401287 addons.go:238] Setting addon gcp-auth=true in "addons-619347"
	I0926 22:29:55.479441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:55.479850 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:55.499515 1401287 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:29:55.499611 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.520325 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
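
The cli_runner lines above resolve which host port Docker mapped to the container's 22/tcp so the SSH client can connect (127.0.0.1:33881 in this run). The same lookup as a standalone sketch, with the inspect template and container name copied from this log:

    // Recover the host port mapped to the container's SSH port.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "addons-619347").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }
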
	I0926 22:29:55.618144 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:55.619415 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:29:55.621107 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:29:55.621131 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:29:55.643383 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:29:55.643405 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:29:55.664765 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.664789 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:29:55.685778 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.732904 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.737583 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.739755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.958754 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:56.145831 1401287 addons.go:479] Verifying addon gcp-auth=true in "addons-619347"
	I0926 22:29:56.147565 1401287 out.go:179] * Verifying gcp-auth addon...
	I0926 22:29:56.149656 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:29:56.153451 1401287 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:29:56.153473 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.234575 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:56.238524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.240547 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.753812 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.754009 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.754105 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.754175 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:29:56.846438 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:56.846489 1401287 retry.go:31] will retry after 1.306898572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:57.154380 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.257757 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:57.257867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.257914 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.653373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.731799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.738612 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.739139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.153929 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:58.154158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.231698 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.238196 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.239871 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.653423 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.732047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.737700 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.739381 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:58.876131 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:58.876169 1401287 retry.go:31] will retry after 1.510195391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:59.153627 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.231973 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.237626 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.239442 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:59.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.732199 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.737381 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.739318 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.154349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.234946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.237553 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.238970 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.387250 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.653371 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.754562 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.754718 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.754737 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:01.142390 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.142433 1401287 retry.go:31] will retry after 2.823589735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.153470 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.231864 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.238191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.238929 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:01.653817 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.732601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.738292 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.738765 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.153510 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.232061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:02.237606 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.239333 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.653691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.785100 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.785181 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.785282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.228531 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.231398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.237322 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.239087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.653658 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.754788 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.754892 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.754903 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.966722 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.154061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:04.232281 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.240238 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.653129 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:04.657965 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.657997 1401287 retry.go:31] will retry after 3.931075545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
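
Across the ig-crd.yaml retries the delay grows from ~245ms up through ~3.9s here (and keeps growing), the signature of exponential backoff with jitter. A sketch of that policy; the base delay, doubling factor, and attempt cap are assumptions, not minikube's actual constants, and apply is a stand-in for the kubectl invocation:

    // Exponential backoff with jitter, in the shape of the retry.go delays above.
    package main

    import (
        "errors"
        "log"
        "math/rand"
        "time"
    )

    func apply() error { return errors.New("apply failed") } // stand-in

    func main() {
        delay := 250 * time.Millisecond // first retry in the log is ~245ms
        for attempt := 1; attempt <= 10; attempt++ {
            if err := apply(); err == nil {
                return
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            log.Printf("attempt %d failed, will retry after %v", attempt, sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        log.Fatal("giving up")
    }
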
	I0926 22:30:04.732441 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.738568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.739156 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.153676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.231619 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.237952 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.238902 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.653858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.732363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.737932 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.739708 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.153005 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.232588 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.238508 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.238930 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.653625 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.732133 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.737660 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.739398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.153662 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.231544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.238376 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.238896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.653623 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.732168 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.737693 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.739572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.153679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.231882 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.237268 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.239112 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.589607 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:08.653128 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.732858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.737867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.739211 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.153590 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.232224 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.237615 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.239714 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:09.284897 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:09.284936 1401287 retry.go:31] will retry after 5.203674911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
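
Editor's note: this validation failure recurs on every retry below. kubectl refuses to apply any manifest document that omits the top-level apiVersion and kind fields, which is exactly what "[apiVersion not set, kind not set]" reports for /etc/kubernetes/addons/ig-crd.yaml (most plausibly an empty or truncated YAML document inside that file). A minimal sketch of the header every document must carry follows; the group, version, and name are illustrative placeholders, not the actual inspektor-gadget CRD contents:

	apiVersion: apiextensions.k8s.io/v1       # required; "apiVersion not set" means this is missing
	kind: CustomResourceDefinition            # required; "kind not set" means this is missing
	metadata:
	  name: examples.mygroup.example.com     # hypothetical name, for illustration only
	spec:
	  group: mygroup.example.com
	  names:
	    kind: Example
	    plural: examples
	  scope: Namespaced
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

The error text itself names the escape hatch (--validate=false), but the real fix is restoring the missing header fields in the shipped manifest, since disabling validation would only hide the broken document.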
	I0926 22:30:09.653321 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.731879 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.737435 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.739163 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.153976 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.232225 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.237891 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.239799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.652648 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.732289 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.740552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.740620 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.153709 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.231772 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.237915 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.238911 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.653574 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.731464 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.737883 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.738742 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.154161 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.255109 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.255143 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.255266 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.653341 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.732278 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.737987 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.739675 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.152601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.231735 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.238458 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.238993 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.653963 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.732677 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.737942 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.738815 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.153349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.231707 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.238128 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.238724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.489029 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:14.654034 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.755687 1401287 kapi.go:107] duration metric: took 24.519261155s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 22:30:14.755725 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.755952 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.152792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:15.222551 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.222596 1401287 retry.go:31] will retry after 5.506436948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.231403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.237852 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.662260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.731552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.738097 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.154099 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.231851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.237284 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.653593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.732118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.737657 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.153191 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.232638 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.238260 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.654087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.732572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.737869 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.153497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.231724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.237938 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.653474 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.754180 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.754664 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.153672 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.231937 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.237429 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.653500 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.732332 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.737902 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.153193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.231558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.238229 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.653596 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.729807 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:20.755463 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.755497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.156185 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.232540 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.237339 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:21.506242 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.506283 1401287 retry.go:31] will retry after 16.573257161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
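
Editor's note: the growing, non-round delays chosen by retry.go (5.203s, 5.506s, 16.573s so far; 15.881s and 26.788s further down) are consistent with a jittered exponential backoff. The following self-contained Go sketch shows that general pattern; it is illustrative only, under that assumption, and is not minikube's actual retry implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to maxAttempts times. After each failure it
	// sleeps for base*2^attempt scaled by a random jitter factor, producing
	// growing, non-deterministic delays like those logged above.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(float64(base) * float64(uint(1)<<attempt) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// base is shortened here so the sketch runs quickly; the log's base is roughly 5s.
		err := retryWithBackoff(5, 100*time.Millisecond, func() error {
			return errors.New("apply failed") // stand-in for the failing kubectl apply call
		})
		fmt.Println("giving up:", err)
	}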
	I0926 22:30:21.653673 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.746511 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.747024 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.154193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.255191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.255336 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.653679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.732249 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.765524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.153260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.232592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.237546 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.653954 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.732247 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.738249 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.153348 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.231679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.238206 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.653640 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.754172 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.754291 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.155071 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.232312 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.237762 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.654098 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.755772 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.756117 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.153020 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.232253 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.237493 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.653784 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.731755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.738149 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.153957 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.231912 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.237304 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.740418 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.740422 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.740489 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.153035 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.232351 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.253652 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.653198 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.732594 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.738617 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.153818 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.255363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.255402 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.653377 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.795403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.795568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.154437 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.255203 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.255255 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.654322 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.731875 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.738025 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.153152 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.232403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.686139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.732196 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.737642 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.153176 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.232567 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.238193 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.653520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.731607 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.738120 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.153329 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.231836 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.238090 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.653138 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.753505 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.753695 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.153545 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.232120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.237425 1401287 kapi.go:107] duration metric: took 44.002941806s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:30:34.654015 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.732058 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.153560 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.232023 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.653149 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.733392 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.195661 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.294162 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.653726 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.732044 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.153456 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.231729 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.653114 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.732251 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.080636 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:38.154372 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.231375 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.653809 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.782691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:30:38.852949 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:38.852986 1401287 retry.go:31] will retry after 15.881899723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:39.153131 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.232352 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.653465 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.731259 1401287 kapi.go:107] duration metric: took 48.503064069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0926 22:30:40.153304 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.652405 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.153555 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.652676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.152544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.653090 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.153739 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.652905 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.153461 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.653397 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.652913 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.153414 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.652678 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.153158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.653282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.152600 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.652859 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.153593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.652792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.152790 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.652641 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.153977 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.653558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.153042 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.653062 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.153284 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.653232 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.153389 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.653118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.735407 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:55.153085 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:55.342933 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.342967 1401287 retry.go:31] will retry after 26.788650375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.653379 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.653069 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.153833 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.153701 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.653075 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.153896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.652981 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.152946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.653566 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.152984 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.653887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.153373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.654120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.153468 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.653248 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.153804 1401287 kapi.go:107] duration metric: took 1m8.004150077s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:04.155559 1401287 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-619347 cluster.
	I0926 22:31:04.156826 1401287 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:04.158107 1401287 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0926 22:31:22.132659 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:22.704256 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0926 22:31:22.704391 1401287 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0926 22:31:22.706313 1401287 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, default-storageclass, volcano, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0926 22:31:22.707981 1401287 addons.go:514] duration metric: took 1m34.923379678s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin metrics-server default-storageclass volcano registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0926 22:31:22.708039 1401287 start.go:246] waiting for cluster config update ...
	I0926 22:31:22.708063 1401287 start.go:255] writing updated cluster config ...
	I0926 22:31:22.708371 1401287 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:22.712517 1401287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:22.716253 1401287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.720372 1401287 pod_ready.go:94] pod "coredns-66bc5c9577-qctdw" is "Ready"
	I0926 22:31:22.720398 1401287 pod_ready.go:86] duration metric: took 4.121653ms for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.722139 1401287 pod_ready.go:83] waiting for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.725796 1401287 pod_ready.go:94] pod "etcd-addons-619347" is "Ready"
	I0926 22:31:22.725814 1401287 pod_ready.go:86] duration metric: took 3.654877ms for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.727751 1401287 pod_ready.go:83] waiting for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.731230 1401287 pod_ready.go:94] pod "kube-apiserver-addons-619347" is "Ready"
	I0926 22:31:22.731252 1401287 pod_ready.go:86] duration metric: took 3.484052ms for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.733085 1401287 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.117180 1401287 pod_ready.go:94] pod "kube-controller-manager-addons-619347" is "Ready"
	I0926 22:31:23.117210 1401287 pod_ready.go:86] duration metric: took 384.107267ms for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.316538 1401287 pod_ready.go:83] waiting for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.716914 1401287 pod_ready.go:94] pod "kube-proxy-sdscg" is "Ready"
	I0926 22:31:23.716945 1401287 pod_ready.go:86] duration metric: took 400.37971ms for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.917057 1401287 pod_ready.go:83] waiting for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316600 1401287 pod_ready.go:94] pod "kube-scheduler-addons-619347" is "Ready"
	I0926 22:31:24.316631 1401287 pod_ready.go:86] duration metric: took 399.543309ms for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316645 1401287 pod_ready.go:40] duration metric: took 1.604097264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:24.363816 1401287 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:24.365720 1401287 out.go:179] * Done! kubectl is now configured to use "addons-619347" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 26 22:30:31 addons-619347 dockerd[1116]: time="2025-09-26T22:30:31.654809043Z" level=info msg="ignoring event" container=bfe36f35592f01ff612439edda9a1993b8e8027021fdac6404bceffeb375504f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:30:32 addons-619347 dockerd[1116]: time="2025-09-26T22:30:32.185705897Z" level=info msg="ignoring event" container=342e5ecae35ceb38d090fda4b0405a8e7750a2e6c80d389875ccf123a57b36b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:30:33 addons-619347 dockerd[1116]: time="2025-09-26T22:30:33.203976219Z" level=info msg="ignoring event" container=a8c51ec77f97a5fac637b6ef5be4005822a67d4a87c010d477eeb9ef66ba94f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:30:33 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:33Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.13.2@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef"
	Sep 26 22:30:33 addons-619347 dockerd[1116]: time="2025-09-26T22:30:33.645739848Z" level=warning msg="reference for unknown type: " digest="sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c" remote="registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c"
	Sep 26 22:30:34 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:34Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c"
	Sep 26 22:30:34 addons-619347 dockerd[1116]: time="2025-09-26T22:30:34.575596252Z" level=warning msg="reference for unknown type: " digest="sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5" remote="registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Sep 26 22:30:35 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:35Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5: Status: Downloaded newer image for registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Sep 26 22:30:35 addons-619347 dockerd[1116]: time="2025-09-26T22:30:35.511395631Z" level=warning msg="reference for unknown type: " digest="sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0" remote="registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Sep 26 22:30:36 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:36Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Sep 26 22:30:36 addons-619347 dockerd[1116]: time="2025-09-26T22:30:36.250630025Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 26 22:30:37 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:37Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 26 22:30:37 addons-619347 dockerd[1116]: time="2025-09-26T22:30:37.860060406Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 26 22:30:38 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:38Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 26 22:30:54 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d7b50889a41ebe13e51942099c1d27d4f15216e01770c2e568846a2cdbd03aa/resolv.conf as [nameserver 10.96.0.10 search volcano-system.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:30:54 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:30:54Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.12.2@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35"
	Sep 26 22:31:00 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:31:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/52b57f2876ecbebc9968d2247bc24ce018ca8553a6a675a3bef07aeaec8e9f4f/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:31:00 addons-619347 dockerd[1116]: time="2025-09-26T22:31:00.549228563Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Sep 26 22:31:03 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:31:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Sep 26 22:31:41 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:31:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ece8ed8b9f23368181b432fc4d9651d8f3e4fcf1966b03618dedee5601eaecb7/resolv.conf as [nameserver 10.96.0.10 search my-volcano.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:31:41 addons-619347 dockerd[1116]: time="2025-09-26T22:31:41.753347129Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:31:41 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:31:41Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
	Sep 26 22:31:53 addons-619347 dockerd[1116]: time="2025-09-26T22:31:53.403277784Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:32:22 addons-619347 dockerd[1116]: time="2025-09-26T22:32:22.423674012Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:33:11 addons-619347 dockerd[1116]: time="2025-09-26T22:33:11.401095883Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
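
Note: the repeated "toomanyrequests" errors above are Docker Hub rejecting the unauthenticated pull of nginx:latest for my-volcano/test-job-nginx-0 (the resolv.conf rewrite at 22:31:41 is that pod's sandbox); every retry fails the same way, so the container is never created. Two common mitigations, sketched here as illustrations and not part of this run (the secret name "regcred" and the credentials are placeholders):

  # pre-load the image into the node so kubelet never pulls from Docker Hub
  minikube -p addons-619347 image load nginx:latest

  # or authenticate the pull via an imagePullSecret referenced from the pod spec
  kubectl --context addons-619347 -n my-volcano create secret docker-registry regcred \
    --docker-username=<user> --docker-password=<token>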
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5dea4358c2bdf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 3 minutes ago       Running             gcp-auth                                 0                   52b57f2876ecb       gcp-auth-78565c9fb4-k7rcx
	7a1c6f8df1356       volcanosh/vc-webhook-manager@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35                                         3 minutes ago       Running             admission                                0                   9d7b50889a41e       volcano-admission-589c7dd587-prpdk
	ce8cf08b141fd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago       Running             csi-snapshotter                          0                   de2617410b653       csi-hostpathplugin-rbzvs
	7dcfe799d3773       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          4 minutes ago       Running             csi-provisioner                          0                   de2617410b653       csi-hostpathplugin-rbzvs
	931b17716c09b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            4 minutes ago       Running             liveness-probe                           0                   de2617410b653       csi-hostpathplugin-rbzvs
	7d61fc01cddfd       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           4 minutes ago       Running             hostpath                                 0                   de2617410b653       csi-hostpathplugin-rbzvs
	1dafa88bf03ff       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                4 minutes ago       Running             node-driver-registrar                    0                   de2617410b653       csi-hostpathplugin-rbzvs
	728e0cf65646d       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             4 minutes ago       Running             controller                               0                   db768efcc91e0       ingress-nginx-controller-9cc49f96f-ghq9n
	2272adc16d5b8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   4 minutes ago       Running             csi-external-health-monitor-controller   0                   de2617410b653       csi-hostpathplugin-rbzvs
	4830d4a0f03bf       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago       Running             csi-resizer                              0                   b9ce75482df7a       csi-hostpath-resizer-0
	e37820c539b12       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             4 minutes ago       Running             csi-attacher                             0                   84987d9e1a070       csi-hostpath-attacher-0
	6e5908937f056       volcanosh/vc-scheduler@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e                                               4 minutes ago       Running             volcano-scheduler                        0                   6c00d6de83634       volcano-scheduler-799f64f894-pthl8
	4cc3707d46bf8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago       Running             volume-snapshot-controller               0                   3dd9df25ed9e8       snapshot-controller-7d9fbc56b8-2zg9l
	d91e1c6dab5ec       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago       Running             volume-snapshot-controller               0                   f5d5e2661efee       snapshot-controller-7d9fbc56b8-ml295
	8e232f360c1a6       volcanosh/vc-controller-manager@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90                                      4 minutes ago       Running             volcano-controllers                      0                   af92806fa0429       volcano-controllers-7dc6969b45-pgdlp
	2be186df9d067       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              patch                                    0                   e4cb125881f09       ingress-nginx-admission-patch-65dgz
	64e745dd36107       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              create                                   0                   8c9898018e8fa       ingress-nginx-admission-create-dbtd8
	8225f70c79655       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       4 minutes ago       Running             local-path-provisioner                   0                   287d670c65c8c       local-path-provisioner-648f6765c9-mgt7q
	cb67a88c34cc5       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        4 minutes ago       Running             metrics-server                           0                   9822140b1ce77       metrics-server-85b7d694d7-mjlqr
	8afc7c86056b8       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        4 minutes ago       Running             yakd                                     0                   9ec56d496b02b       yakd-dashboard-5ff678cb9-gcppv
	a6d48b6dd738f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            4 minutes ago       Running             gadget                                   0                   1e350b656bd65       gadget-9rfhl
	152ab0bca0d3a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              4 minutes ago       Running             registry-proxy                           0                   b551f72e238a6       registry-proxy-vs5xn
	6c95150654506       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         4 minutes ago       Running             minikube-ingress-dns                     0                   8f1cf5e8da338       kube-ingress-dns-minikube
	113de21811067       registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                                             4 minutes ago       Running             registry                                 0                   86f5d30f86393       registry-66898fdd98-gxfpk
	075ac67607cf0       nvcr.io/nvidia/k8s-device-plugin@sha256:630596340f8e83aa10b0bc13a46db76772e31b7dccfc34d3a4e41ab7e0aa6117                                     4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   a467d003e737c       nvidia-device-plugin-daemonset-q4gzr
	4ff8a3b49dd7d       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               4 minutes ago       Running             cloud-spanner-emulator                   0                   9bc7f8c4727c5       cloud-spanner-emulator-85f6b7fc65-2hm5t
	bf27d19913ae5       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               4 minutes ago       Running             amd-gpu-device-plugin                    0                   b4bd288cc5cb1       amd-gpu-device-plugin-vs4x8
	d9822a41079f6       6e38f40d628db                                                                                                                                4 minutes ago       Running             storage-provisioner                      0                   e7dd4d41d742b       storage-provisioner
	9ea233eb6b299       52546a367cc9e                                                                                                                                4 minutes ago       Running             coredns                                  0                   3dff0fbc29922       coredns-66bc5c9577-qctdw
	227d066a100ce       df0860106674d                                                                                                                                4 minutes ago       Running             kube-proxy                               0                   9cd7f6237aa02       kube-proxy-sdscg
	f5b2050f68de5       a0af72f2ec6d6                                                                                                                                5 minutes ago       Running             kube-controller-manager                  0                   fbe20fd4325ef       kube-controller-manager-addons-619347
	8209664c099ee       46169d968e920                                                                                                                                5 minutes ago       Running             kube-scheduler                           0                   779a1e971ca62       kube-scheduler-addons-619347
	9d1b130b03b02       90550c43ad2bc                                                                                                                                5 minutes ago       Running             kube-apiserver                           0                   f2516f75f5542       kube-apiserver-addons-619347
	5ae0da6e5bfbf       5f1f5298c888d                                                                                                                                5 minutes ago       Running             etcd                                     0                   fa78f9e958055       etcd-addons-619347
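
Note: every addon container above is Running (the two ingress-nginx kube-webhook-certgen jobs, create and patch, exited by design), and there is no row at all for the test-job nginx container, consistent with the pull never completing. An illustrative way to confirm on the node itself:

  # list all containers, including ones that never reached Running (none match here)
  minikube -p addons-619347 ssh -- docker ps -a --filter name=test-job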
	
	
	==> controller_ingress [728e0cf65646] <==
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	W0926 22:30:33.796826       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0926 22:30:33.796983       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0926 22:30:33.803331       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.0" state="clean" commit="f28b4c9efbca5c5c0af716d9f2d5702667ee8a45" platform="linux/amd64"
	I0926 22:30:34.860571       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0926 22:30:34.870879       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0926 22:30:34.880241       7 nginx.go:273] "Starting NGINX Ingress controller"
	I0926 22:30:34.886138       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"af1f95e7-79cc-4834-9dbc-25b238d52837", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0926 22:30:34.887630       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"8ae5a3fc-8742-4ced-9ecf-3e3a21a0cf7c", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0926 22:30:34.887711       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f6bd66b9-f1c6-476b-a596-e7c7ed771583", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0926 22:30:36.082816       7 nginx.go:319] "Starting NGINX process"
	I0926 22:30:36.082933       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0926 22:30:36.083217       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0926 22:30:36.083836       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:30:36.090331       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0926 22:30:36.090382       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-ghq9n"
	I0926 22:30:36.093730       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	I0926 22:30:36.132936       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:30:36.133071       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0926 22:30:36.133167       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0926 22:30:36.196634       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	
	
	==> coredns [9ea233eb6b29] <==
	[INFO] 10.244.0.8:55538 - 12444 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003125387s
	[INFO] 10.244.0.8:44508 - 52472 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000088533s
	[INFO] 10.244.0.8:44508 - 52121 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000116925s
	[INFO] 10.244.0.8:40588 - 27017 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000090303s
	[INFO] 10.244.0.8:40588 - 26695 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000142751s
	[INFO] 10.244.0.8:32780 - 27322 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000091822s
	[INFO] 10.244.0.8:32780 - 26988 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130714s
	[INFO] 10.244.0.8:34268 - 17213 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132338s
	[INFO] 10.244.0.8:34268 - 16970 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009144s
	[INFO] 10.244.0.27:32935 - 45410 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000327431s
	[INFO] 10.244.0.27:49406 - 23181 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000412135s
	[INFO] 10.244.0.27:42691 - 10663 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129221s
	[INFO] 10.244.0.27:49167 - 28887 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157287s
	[INFO] 10.244.0.27:40544 - 36384 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160696s
	[INFO] 10.244.0.27:45145 - 3022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123636s
	[INFO] 10.244.0.27:57336 - 33875 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00499531s
	[INFO] 10.244.0.27:41391 - 16202 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005792959s
	[INFO] 10.244.0.27:59854 - 59303 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005004398s
	[INFO] 10.244.0.27:34824 - 56259 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005925015s
	[INFO] 10.244.0.27:36869 - 29305 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004734879s
	[INFO] 10.244.0.27:45437 - 987 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00498032s
	[INFO] 10.244.0.27:47010 - 60828 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005607554s
	[INFO] 10.244.0.27:46662 - 45152 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007088447s
	[INFO] 10.244.0.27:60306 - 17345 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000925116s
	[INFO] 10.244.0.27:50259 - 39178 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001983867s
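
Note: the NXDOMAIN runs above are ordinary search-path expansion, not a DNS fault: with ndots:5 from the pod resolv.conf (rewritten earlier in the cri-dockerd log), a name like storage.googleapis.com carries fewer than five dots, so each search suffix is tried before the bare name. Reconstructed from the 10.244.0.27 entries:

  storage.googleapis.com.gcp-auth.svc.cluster.local          -> NXDOMAIN
  storage.googleapis.com.svc.cluster.local                   -> NXDOMAIN
  storage.googleapis.com.cluster.local                       -> NXDOMAIN
  storage.googleapis.com.local                               -> NXDOMAIN
  storage.googleapis.com.us-east4-a.c.k8s-minikube.internal  -> NXDOMAIN
  storage.googleapis.com.c.k8s-minikube.internal             -> NXDOMAIN
  storage.googleapis.com.google.internal                     -> NXDOMAIN
  storage.googleapis.com                                     -> NOERROR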
	
	
	==> describe nodes <==
	Name:               addons-619347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-619347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-619347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-619347
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-619347"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-619347
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:34:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:34:07 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:34:07 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:34:07 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:34:07 +0000   Fri, 26 Sep 2025 22:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-619347
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 0728f6ac4f7f4421b7f9eeb1f21a8502
	  System UUID:                bfe74e22-ee1d-47b3-9c54-c1f6ef287d9d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-85f6b7fc65-2hm5t     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  gadget                      gadget-9rfhl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-78565c9fb4-k7rcx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-ghq9n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-vs4x8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 coredns-66bc5c9577-qctdw                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpathplugin-rbzvs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 etcd-addons-619347                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m59s
	  kube-system                 kube-apiserver-addons-619347                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-addons-619347       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-sdscg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-addons-619347                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 metrics-server-85b7d694d7-mjlqr             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-q4gzr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-66898fdd98-gxfpk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-kjmd4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-vs5xn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 snapshot-controller-7d9fbc56b8-2zg9l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-7d9fbc56b8-ml295        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-mgt7q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  my-volcano                  test-job-nginx-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  volcano-system              volcano-admission-589c7dd587-prpdk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  volcano-system              volcano-controllers-7dc6969b45-pgdlp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  volcano-system              volcano-scheduler-799f64f894-pthl8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gcppv              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  0 (0%)
	  memory             588Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m52s                kube-proxy       
	  Normal  Starting                 5m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m59s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m59s                kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s                kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s                kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m55s                node-controller  Node addons-619347 event: Registered Node addons-619347 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 7d e7 a4 ab c8 08 06
	[  +1.583504] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 a8 90 b4 e9 ec 08 06
	[  +2.727096] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 11 5f 5c bb 18 08 06
	[  +0.077832] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 b0 60 62 2a 0f 08 06
	[  +2.140079] IPv4: martian source 10.244.0.8 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.023792] IPv4: martian source 10.244.0.8 from 10.244.0.7, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[  +1.345643] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
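
Note: "martian source" messages flag packets arriving on eth0 with a source address the kernel does not expect on that interface; with pod traffic from 10.244.0.0/24 crossing the Docker bridge they show up routinely in these runs, and nothing here ties them to the failure. Logging of these is a per-interface sysctl, checkable with (illustrative):

  sysctl net.ipv4.conf.all.log_martians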
	
	
	==> etcd [5ae0da6e5bfb] <==
	{"level":"warn","ts":"2025-09-26T22:29:39.359357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:39.402678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:51.875788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:51.882030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:56.750352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.699546ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:29:56.750520Z","caller":"traceutil/trace.go:172","msg":"trace[545120805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1029; }","duration":"124.835239ms","start":"2025-09-26T22:29:56.625613Z","end":"2025-09-26T22:29:56.750448Z","steps":["trace[545120805] 'range keys from in-memory index tree'  (duration: 124.657379ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:29:56.750545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.950667ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040237519390281 > lease_revoke:<id:70cc9988256cc037>","response":"size:29"}
	{"level":"info","ts":"2025-09-26T22:29:56.750622Z","caller":"traceutil/trace.go:172","msg":"trace[550957012] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"110.947341ms","start":"2025-09-26T22:29:56.639663Z","end":"2025-09-26T22:29:56.750610Z","steps":["trace[550957012] 'read index received'  (duration: 40.919µs)","trace[550957012] 'applied index is now lower than readState.Index'  (duration: 110.905488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:29:56.750818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.149289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/gadget/gadget-role\" limit:1 ","response":"range_response_count:1 size:929"}
	{"level":"info","ts":"2025-09-26T22:29:56.750855Z","caller":"traceutil/trace.go:172","msg":"trace[1241908482] range","detail":"{range_begin:/registry/roles/gadget/gadget-role; range_end:; response_count:1; response_revision:1029; }","duration":"111.19346ms","start":"2025-09-26T22:29:56.639653Z","end":"2025-09-26T22:29:56.750846Z","steps":["trace[1241908482] 'agreement among raft nodes before linearized reading'  (duration: 111.040998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:11.365351Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.170976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-26T22:30:11.365445Z","caller":"traceutil/trace.go:172","msg":"trace[2098668277] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:1075; }","duration":"101.279805ms","start":"2025-09-26T22:30:11.264150Z","end":"2025-09-26T22:30:11.365430Z","steps":["trace[2098668277] 'range keys from in-memory index tree'  (duration: 101.005862ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:16.821834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.870731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.885825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.893998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.925127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.935713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.946969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.959548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.971710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.978983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.988574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.999879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:30:33.500976Z","caller":"traceutil/trace.go:172","msg":"trace[381421690] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"106.589616ms","start":"2025-09-26T22:30:33.394366Z","end":"2025-09-26T22:30:33.500955Z","steps":["trace[381421690] 'process raft request'  (duration: 106.446725ms)"],"step_count":1}
	
	
	==> gcp-auth [5dea4358c2bd] <==
	2025/09/26 22:31:03 GCP Auth Webhook started!
	2025/09/26 22:31:39 Ready to marshal response ...
	2025/09/26 22:31:39 Ready to write response ...
	2025/09/26 22:31:40 Ready to marshal response ...
	2025/09/26 22:31:40 Ready to write response ...
	
	
	==> kernel <==
	 22:34:41 up  4:17,  0 users,  load average: 0.35, 1.15, 1.54
	Linux addons-619347 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9d1b130b03b0] <==
	W0926 22:30:39.018366       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:40.030642       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:41.130177       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:42.145418       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:43.184711       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:44.249371       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:45.252938       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:46.279361       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:47.336685       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:48.427342       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:49.458244       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:50.535922       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:51.575570       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	I0926 22:30:52.555148       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 22:30:52.614075       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	I0926 22:30:52.848591       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 22:30:53.716894       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	W0926 22:30:54.767760       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.128.242:443: connect: connection refused
	I0926 22:31:39.625353       1 controller.go:667] quota admission added evaluator for: jobs.batch.volcano.sh
	I0926 22:31:39.642576       1 controller.go:667] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0926 22:31:55.603277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:32:14.471934       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:33:02.052776       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:33:33.724342       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:24.872751       1 stats.go:136] "Error getting keys" err="empty key: \"\""
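
Note: the fail-closed webhook errors stop at 22:30:54, which matches the vc-webhook-manager image pull completing at 22:30:54 in the cri-dockerd log; once volcano-admission is serving, the vcjob created at 22:31:39 is admitted (the two quota evaluators registered above). If such a startup window were the suspect, one illustrative check is whether the webhook service has endpoints:

  kubectl --context addons-619347 -n volcano-system get endpoints volcano-admission-service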
	
	
	==> kube-controller-manager [f5b2050f68de] <==
	I0926 22:29:46.803265       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:29:46.803305       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:29:46.804358       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:29:46.804400       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:29:46.804491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:29:46.804682       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:29:46.807417       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:29:46.807858       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0926 22:29:46.810795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:29:46.812891       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0926 22:29:46.818210       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0926 22:29:46.823258       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:29:49.687051       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0926 22:30:16.816012       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 22:30:16.816214       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I0926 22:30:16.816291       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I0926 22:30:16.816334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I0926 22:30:16.816371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I0926 22:30:16.816421       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I0926 22:30:16.816460       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0926 22:30:16.816551       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0926 22:30:16.831474       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0926 22:30:16.836988       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0926 22:30:18.017434       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:30:18.037609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [227d066a100c] <==
	I0926 22:29:48.632051       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:29:48.823798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:48.926913       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:48.926974       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:29:48.927216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:48.966553       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:29:48.966624       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:48.976081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:48.977337       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:48.977360       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:48.983888       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:48.983916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:48.984026       1 config.go:200] "Starting service config controller"
	I0926 22:29:48.984052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:48.984116       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:48.984123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:48.987589       1 config.go:309] "Starting node config controller"
	I0926 22:29:48.987610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:48.987619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:49.084696       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:29:49.084764       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:49.085094       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8209664c099e] <==
	E0926 22:29:39.815534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:39.815639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:39.815741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:39.815842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:39.815881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:39.815924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:39.815978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:39.816083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:39.816079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:39.816116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:39.816205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:29:39.816287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:39.816442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:39.816526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.634465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:40.655555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.682056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:40.739683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:40.750044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:40.783186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:40.869968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:40.950295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:41.003301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:41.010326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0926 22:29:41.412472       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
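
Triage note: the "Failed to watch ... is forbidden" reflector errors above all fall within the first seconds of kube-scheduler startup (22:29:39-22:29:41) and stop once "Caches are synced" is logged, which is consistent with the usual bootstrap race where the scheduler's informers start listing before its RBAC bindings have propagated; they are unrelated to the image-pull failure that actually fails this test. A hypothetical follow-up check (not part of this run) to confirm the permissions are healthy after startup:

    # Verify the scheduler identity can now list/watch the resources it was
    # denied during bootstrap (impersonation via --as).
    kubectl --context addons-619347 auth can-i list pods --as=system:kube-scheduler
    kubectl --context addons-619347 auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler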
	
	
	==> kubelet <==
	Sep 26 22:32:08 addons-619347 kubelet[2321]: E0926 22:32:08.310287    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:32:22 addons-619347 kubelet[2321]: E0926 22:32:22.426582    2321 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Sep 26 22:32:22 addons-619347 kubelet[2321]: E0926 22:32:22.426640    2321 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Sep 26 22:32:22 addons-619347 kubelet[2321]: E0926 22:32:22.426725    2321 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(ea13d899-24fa-4952-96e2-f96a6e3c7beb): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:32:22 addons-619347 kubelet[2321]: E0926 22:32:22.426756    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:32:33 addons-619347 kubelet[2321]: E0926 22:32:33.310932    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:32:36 addons-619347 kubelet[2321]: I0926 22:32:36.310192    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-gxfpk" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:32:46 addons-619347 kubelet[2321]: E0926 22:32:46.310690    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:32:51 addons-619347 kubelet[2321]: I0926 22:32:51.310162    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vs5xn" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:32:58 addons-619347 kubelet[2321]: E0926 22:32:58.310435    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:33:11 addons-619347 kubelet[2321]: E0926 22:33:11.403395    2321 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Sep 26 22:33:11 addons-619347 kubelet[2321]: E0926 22:33:11.403451    2321 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Sep 26 22:33:11 addons-619347 kubelet[2321]: E0926 22:33:11.403580    2321 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(ea13d899-24fa-4952-96e2-f96a6e3c7beb): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:33:11 addons-619347 kubelet[2321]: E0926 22:33:11.403612    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:33:25 addons-619347 kubelet[2321]: E0926 22:33:25.310551    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:33:39 addons-619347 kubelet[2321]: E0926 22:33:39.310185    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:33:51 addons-619347 kubelet[2321]: I0926 22:33:51.310363    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-gxfpk" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:33:51 addons-619347 kubelet[2321]: E0926 22:33:51.310781    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:33:59 addons-619347 kubelet[2321]: E0926 22:33:59.258305    2321 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Sep 26 22:33:59 addons-619347 kubelet[2321]: E0926 22:33:59.258420    2321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ab44b0-8ebe-4b65-831d-a4cc579401a7-gcr-creds podName:70ab44b0-8ebe-4b65-831d-a4cc579401a7 nodeName:}" failed. No retries permitted until 2025-09-26 22:36:01.258403821 +0000 UTC m=+379.030982967 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/70ab44b0-8ebe-4b65-831d-a4cc579401a7-gcr-creds") pod "registry-creds-764b6fb674-kjmd4" (UID: "70ab44b0-8ebe-4b65-831d-a4cc579401a7") : secret "registry-creds-gcr" not found
	Sep 26 22:34:05 addons-619347 kubelet[2321]: I0926 22:34:05.310149    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vs5xn" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:34:05 addons-619347 kubelet[2321]: E0926 22:34:05.310298    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:34:10 addons-619347 kubelet[2321]: E0926 22:34:10.311071    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-kjmd4" podUID="70ab44b0-8ebe-4b65-831d-a4cc579401a7"
	Sep 26 22:34:16 addons-619347 kubelet[2321]: E0926 22:34:16.310018    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
	Sep 26 22:34:29 addons-619347 kubelet[2321]: E0926 22:34:29.310556    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="ea13d899-24fa-4952-96e2-f96a6e3c7beb"
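
Triage note: every kubelet failure above for test-job-nginx-0 has the same root cause: Docker Hub's anonymous pull rate limit ("toomanyrequests") while pulling nginx:latest, so the pod cycles between ErrImagePull and ImagePullBackOff until the test's 3m0s deadline expires. A minimal mitigation sketch (assumes the host daemon can still pull, e.g. because it is authenticated; this is not something the CI job currently does):

    # Pull on the host, then copy the image into the minikube node so the
    # kubelet never has to contact Docker Hub itself.
    docker pull nginx:latest
    minikube -p addons-619347 image load nginx:latest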
	
	
	==> storage-provisioner [d9822a41079f] <==
	W0926 22:34:15.605537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:17.608514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:17.612391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:19.615913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:19.621040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:21.624150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:21.629359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:23.632057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:23.635793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:25.639232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:25.643398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:27.646755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:27.650398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:29.653925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:29.657554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:31.660823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:31.666619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:33.669334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:33.673022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:35.676325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:35.681180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:37.683993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:37.687796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:39.690460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:39.695287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
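
Triage note: the storage-provisioner warnings above are the API server's deprecation warning for core/v1 Endpoints, surfaced by client-go's warning handler; the roughly 2-second cadence of the timestamps suggests a periodic Endpoints operation (likely leader-election renewals), so this is noise rather than a failure. The discovery.k8s.io/v1 replacement can be inspected directly (a hypothetical follow-up, not part of this run):

    # List the EndpointSlice objects the warning recommends migrating to.
    kubectl --context addons-619347 get endpointslices.discovery.k8s.io -A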
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
helpers_test.go:269: (dbg) Run:  kubectl --context addons-619347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz registry-creds-764b6fb674-kjmd4 test-job-nginx-0
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-619347 describe pod ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz registry-creds-764b6fb674-kjmd4 test-job-nginx-0
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-619347 describe pod ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz registry-creds-764b6fb674-kjmd4 test-job-nginx-0: exit status 1 (60.88957ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dbtd8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-65dgz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-kjmd4" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-619347 describe pod ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz registry-creds-764b6fb674-kjmd4 test-job-nginx-0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-619347 addons disable volcano --alsologtostderr -v=1: (11.300202297s)
--- FAIL: TestAddons/serial/Volcano (208.92s)

TestAddons/parallel/Ingress (491.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-619347 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-619347 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-619347 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-26 22:43:31.861718238 +0000 UTC m=+881.817701164
addons_test.go:252: (dbg) Run:  kubectl --context addons-619347 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-619347 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-619347/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:35:31 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.33
IPs:
  IP:  10.244.0.33
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq742 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-jq742:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/nginx to addons-619347
Normal   Pulling    5m17s (x5 over 7m59s)   kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m17s (x5 over 7m59s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m17s (x5 over 7m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    2m57s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m57s (x21 over 7m59s)  kubelet            Error: ImagePullBackOff
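
Triage note: the event trail confirms the same Docker Hub rate limiting seen in the Volcano test, now against docker.io/nginx:alpine: five pull attempts over 8 minutes, each answered with "toomanyrequests", then steady ImagePullBackOff. One per-namespace mitigation sketch (the credentials and secret name below are placeholders; this job does not use them):

    # Attach an imagePullSecret to the default ServiceAccount so pulls count
    # against an authenticated (higher) Docker Hub quota.
    kubectl --context addons-619347 create secret docker-registry dockerhub-creds \
      --docker-username=<user> --docker-password=<token> -n default
    kubectl --context addons-619347 patch serviceaccount default -n default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'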
addons_test.go:252: (dbg) Run:  kubectl --context addons-619347 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-619347 logs nginx -n default: exit status 1 (69.255177ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-619347 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
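
Triage note: both failures shown so far are environmental (registry rate limiting), not regressions in the addons under test. A cluster-level mitigation sketch is to create the profile with a pull-through registry mirror (the mirror URL below is a placeholder; the flag is passed through to the node's Docker daemon):

    minikube start -p addons-619347 --driver=docker \
      --registry-mirror=https://registry-mirror.example.com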
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-619347
helpers_test.go:243: (dbg) docker inspect addons-619347:

-- stdout --
	[
	    {
	        "Id": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	        "Created": "2025-09-26T22:29:24.504112175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1401920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:29:24.53667075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hosts",
	        "LogPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d-json.log",
	        "Name": "/addons-619347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-619347:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-619347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	                "LowerDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-619347",
	                "Source": "/var/lib/docker/volumes/addons-619347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-619347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-619347",
	                "name.minikube.sigs.k8s.io": "addons-619347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3015286d67af8b7391959f3121ca363feb45d14fa55ccdc7193de806e7fe6e96",
	            "SandboxKey": "/var/run/docker/netns/3015286d67af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-619347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:cd:cb:d7:a7:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22f06ef7f1b3d4919d623039fdb7eaef892f9c8c0a7074ff47e8c48934f6f117",
	                    "EndpointID": "4b693477b2120ec160d127bc2bc90fabb016ebf45c34df1cad9bd2399ffdc1cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-619347",
	                        "f0caa77a5878"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-619347 -n addons-619347
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 logs -n 25
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ start   │ --download-only -p download-docker-193843 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ -p download-docker-193843                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ --download-only -p binary-mirror-237584 --alsologtostderr --binary-mirror http://127.0.0.1:35911 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-237584                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ disable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-619347 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-619347 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ addons  │ addons-619347 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ enable headlamp -p addons-619347 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ ip      │ addons-619347 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                             │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:40 UTC │ 26 Sep 25 22:41 UTC │
	│ addons  │ addons-619347 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:41 UTC │ 26 Sep 25 22:41 UTC │
	│ addons  │ addons-619347 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:41 UTC │ 26 Sep 25 22:41 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:01
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:01.756585 1401287 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:01.756707 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756717 1401287 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:01.756724 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756944 1401287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:29:01.757503 1401287 out.go:368] Setting JSON to false
	I0926 22:29:01.758423 1401287 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15086,"bootTime":1758910656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:01.758529 1401287 start.go:140] virtualization: kvm guest
	I0926 22:29:01.760350 1401287 out.go:179] * [addons-619347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:01.761510 1401287 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:01.761513 1401287 notify.go:220] Checking for updates...
	I0926 22:29:01.763728 1401287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:01.765716 1401287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:01.766946 1401287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:29:01.767993 1401287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:01.768984 1401287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:01.770171 1401287 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:01.792688 1401287 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:01.792779 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.845164 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.835526355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.845273 1401287 docker.go:318] overlay module found
	I0926 22:29:01.847734 1401287 out.go:179] * Using the docker driver based on user configuration
	I0926 22:29:01.848892 1401287 start.go:304] selected driver: docker
	I0926 22:29:01.848910 1401287 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:01.848922 1401287 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:01.849577 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.899952 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.890671576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.900135 1401287 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:01.900371 1401287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:01.902046 1401287 out.go:179] * Using Docker driver with root privileges
	I0926 22:29:01.903097 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:01.903175 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:01.903186 1401287 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
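	The two cni.go lines above are minikube's CNI auto-selection: the docker driver combined with the docker container runtime on Kubernetes v1.24+ has no dockershim networking, so a bridge CNI is recommended and NetworkPlugin=cni is set. A minimal Go sketch of that decision (chooseCNI and its signature are illustrative, not minikube's actual API):

	// chooseCNI is a hypothetical reconstruction of the selection logged
	// above: on k8s >= 1.24 with the docker driver + docker runtime,
	// dockershim networking is gone, so a bridge CNI is recommended.
	package main

	import "fmt"

	func chooseCNI(driver, runtime string, k8sMinor int) string {
		if driver == "docker" && runtime == "docker" && k8sMinor >= 24 {
			return "bridge" // caller then sets NetworkPlugin=cni in the cluster config
		}
		return "" // leave selection to other rules
	}

	func main() {
		fmt.Println(chooseCNI("docker", "docker", 34)) // prints: bridge
	}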
	I0926 22:29:01.903270 1401287 start.go:348] cluster config:
	{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:01.904858 1401287 out.go:179] * Starting "addons-619347" primary control-plane node in "addons-619347" cluster
	I0926 22:29:01.906044 1401287 cache.go:123] Beginning downloading kic base image for docker with docker
	I0926 22:29:01.907356 1401287 out.go:179] * Pulling base image v0.0.48 ...
	I0926 22:29:01.908297 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:01.908335 1401287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0926 22:29:01.908345 1401287 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:01.908416 1401287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:29:01.908443 1401287 preload.go:172] Found /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:01.908453 1401287 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0926 22:29:01.908843 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:01.908883 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json: {Name:mkc2865f84bd589b8eae2eb83eded5267684d61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
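	The profile save above goes through a lock-guarded WriteFile (Delay:500ms Timeout:1m0s in the log). A hedged sketch of that pattern, assuming a simple lock-file protocol rather than minikube's real lock package:

	// Acquire an exclusive lock file, retrying every delay up to timeout,
	// then write config.json and release the lock on return.
	package kiclock

	import (
		"errors"
		"os"
		"time"
	)

	func writeLocked(path string, data []byte, delay, timeout time.Duration) error {
		lock := path + ".lock"
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				defer os.Remove(lock)
				f.Close()
				return os.WriteFile(path, data, 0o644)
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for " + lock)
			}
			time.Sleep(delay) // 500ms in the log above
		}
	}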
	I0926 22:29:01.925224 1401287 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:29:01.925402 1401287 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:29:01.925420 1401287 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0926 22:29:01.925428 1401287 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0926 22:29:01.925435 1401287 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0926 22:29:01.925439 1401287 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0926 22:29:14.155592 1401287 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0926 22:29:14.155633 1401287 cache.go:232] Successfully downloaded all kic artifacts
	I0926 22:29:14.155712 1401287 start.go:360] acquireMachinesLock for addons-619347: {Name:mk16a13d35eefb90d37e67ab9d542372a6292c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:14.155829 1401287 start.go:364] duration metric: took 91.725µs to acquireMachinesLock for "addons-619347"
	I0926 22:29:14.155856 1401287 start.go:93] Provisioning new machine with config: &{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:14.155980 1401287 start.go:125] createHost starting for "" (driver="docker")
	I0926 22:29:14.157562 1401287 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0926 22:29:14.157823 1401287 start.go:159] libmachine.API.Create for "addons-619347" (driver="docker")
	I0926 22:29:14.157858 1401287 client.go:168] LocalClient.Create starting
	I0926 22:29:14.158021 1401287 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem
	I0926 22:29:14.205932 1401287 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem
	I0926 22:29:14.366294 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 22:29:14.383620 1401287 cli_runner.go:211] docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 22:29:14.383691 1401287 network_create.go:284] running [docker network inspect addons-619347] to gather additional debugging logs...
	I0926 22:29:14.383716 1401287 cli_runner.go:164] Run: docker network inspect addons-619347
	W0926 22:29:14.399817 1401287 cli_runner.go:211] docker network inspect addons-619347 returned with exit code 1
	I0926 22:29:14.399876 1401287 network_create.go:287] error running [docker network inspect addons-619347]: docker network inspect addons-619347: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-619347 not found
	I0926 22:29:14.399898 1401287 network_create.go:289] output of [docker network inspect addons-619347]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-619347 not found
	
	** /stderr **
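	The warning above is the expected first-start path: docker network inspect exits 1 with "network addons-619347 not found", and that outcome is treated as "create it" rather than as a failure. A sketch of the probe (package and function names are illustrative):

	// Probe for an existing docker network; a non-zero exit whose stderr
	// says "not found" means "go create it" rather than a hard error.
	package netprobe

	import (
		"bytes"
		"os/exec"
		"strings"
	)

	func networkExists(name string) (bool, error) {
		cmd := exec.Command("docker", "network", "inspect", name)
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err == nil {
			return true, nil
		} else if strings.Contains(stderr.String(), "not found") {
			return false, nil // expected on first start
		} else {
			return false, err
		}
	}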
	I0926 22:29:14.400043 1401287 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:14.417291 1401287 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ae9be0}
	I0926 22:29:14.417339 1401287 network_create.go:124] attempt to create docker network addons-619347 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0926 22:29:14.417382 1401287 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-619347 addons-619347
	I0926 22:29:14.473127 1401287 network_create.go:108] docker network addons-619347 192.168.49.0/24 created
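	network.go:206 settled on 192.168.49.0/24 as a free private subnet. Conceptually the picker walks candidate /24 blocks and returns the first one nothing on the host already claims; a simplified sketch (the candidate list and step are illustrative, and isTaken stands in for the real interface/network reservation check):

	// Scan candidate /24 subnets starting at 192.168.49.0 and return the
	// first one that is not already in use on this host.
	package subnet

	import "fmt"

	func firstFree(isTaken func(cidr string) bool) (string, error) {
		for third := 49; third <= 254; third += 9 { // step is illustrative
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !isTaken(cidr) {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}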
	I0926 22:29:14.473163 1401287 kic.go:121] calculated static IP "192.168.49.2" for the "addons-619347" container
	I0926 22:29:14.473252 1401287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 22:29:14.489293 1401287 cli_runner.go:164] Run: docker volume create addons-619347 --label name.minikube.sigs.k8s.io=addons-619347 --label created_by.minikube.sigs.k8s.io=true
	I0926 22:29:14.506092 1401287 oci.go:103] Successfully created a docker volume addons-619347
	I0926 22:29:14.506161 1401287 cli_runner.go:164] Run: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 22:29:20.841341 1401287 cli_runner.go:217] Completed: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.335139978s)
	I0926 22:29:20.841369 1401287 oci.go:107] Successfully prepared a docker volume addons-619347
	I0926 22:29:20.841406 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:20.841430 1401287 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 22:29:20.841514 1401287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 22:29:24.436467 1401287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.594814262s)
	I0926 22:29:24.436527 1401287 kic.go:203] duration metric: took 3.595091279s to extract preloaded images to volume ...
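	The 3.6s docker run above is a throwaway container whose entrypoint is /usr/bin/tar: the preload tarball is mounted read-only and unpacked straight into the machine's named volume, so the host itself needs no lz4. A sketch of assembling that invocation:

	// Build the one-shot extraction container shown in the log: mount the
	// preload tarball read-only, mount the machine volume, untar into it.
	package preload

	import "os/exec"

	func extractCmd(tarball, volume, image string) *exec.Cmd {
		return exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	}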
	W0926 22:29:24.436629 1401287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 22:29:24.436675 1401287 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 22:29:24.436720 1401287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 22:29:24.488860 1401287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-619347 --name addons-619347 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-619347 --network addons-619347 --ip 192.168.49.2 --volume addons-619347:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 22:29:24.739034 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Running}}
	I0926 22:29:24.756901 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:24.774535 1401287 cli_runner.go:164] Run: docker exec addons-619347 stat /var/lib/dpkg/alternatives/iptables
	I0926 22:29:24.821732 1401287 oci.go:144] the created container "addons-619347" has a running status.
	I0926 22:29:24.821762 1401287 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa...
	I0926 22:29:25.058873 1401287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 22:29:25.084720 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.103222 1401287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 22:29:25.103256 1401287 kic_runner.go:114] Args: [docker exec --privileged addons-619347 chown docker:docker /home/docker/.ssh/authorized_keys]
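	kic.go:225 generates a fresh RSA key for the node and installs the public half as /home/docker/.ssh/authorized_keys (the two kic_runner lines then chown it inside the container). A minimal sketch of producing the key pair, assuming golang.org/x/crypto/ssh for the authorized_keys encoding:

	// Generate an RSA key pair; the private key stays in the host profile
	// directory, the authorized_keys line is copied into the container.
	package sshkey

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"

		"golang.org/x/crypto/ssh"
	)

	func newKeyPair() (privPEM, authorizedKey []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		privPEM = pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			return nil, nil, err
		}
		return privPEM, ssh.MarshalAuthorizedKey(pub), nil
	}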
	I0926 22:29:25.152057 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.171032 1401287 machine.go:93] provisionDockerMachine start ...
	I0926 22:29:25.171165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.192356 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.192770 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.192789 1401287 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:29:25.329327 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.329360 1401287 ubuntu.go:182] provisioning hostname "addons-619347"
	I0926 22:29:25.329440 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.347623 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.347852 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.347866 1401287 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-619347 && echo "addons-619347" | sudo tee /etc/hostname
	I0926 22:29:25.495671 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.495764 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.513361 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.513676 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.513706 1401287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-619347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-619347/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-619347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.648127 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.648158 1401287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-1396392/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-1396392/.minikube}
	I0926 22:29:25.648181 1401287 ubuntu.go:190] setting up certificates
	I0926 22:29:25.648194 1401287 provision.go:84] configureAuth start
	I0926 22:29:25.648256 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:25.665581 1401287 provision.go:143] copyHostCerts
	I0926 22:29:25.665655 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.665964 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.666216 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.666332 1401287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem org=jenkins.addons-619347 san=[127.0.0.1 192.168.49.2 addons-619347 localhost minikube]
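	provision.go:117 issues a server certificate signed by the machine CA with exactly the SANs listed (127.0.0.1, 192.168.49.2, addons-619347, localhost, minikube). A compressed crypto/x509 sketch of that issuance; caCert/caKey are assumed to be the already-loaded CA materials, and key size and lifetime here are illustrative:

	// Issue a CA-signed server cert whose SANs cover every name/IP the
	// docker daemon will be reached by, mirroring the san=[...] list above.
	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-619347"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-619347", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err
	}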
	I0926 22:29:26.345521 1401287 provision.go:177] copyRemoteCerts
	I0926 22:29:26.345589 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:26.345626 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.363376 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:26.461182 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:26.487057 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:26.511222 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 22:29:26.535844 1401287 provision.go:87] duration metric: took 887.635192ms to configureAuth
	I0926 22:29:26.535878 1401287 ubuntu.go:206] setting minikube options for container-runtime
	I0926 22:29:26.536095 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:26.536165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.554135 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.554419 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.554438 1401287 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 22:29:26.690395 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0926 22:29:26.690420 1401287 ubuntu.go:71] root file system type: overlay
	I0926 22:29:26.690565 1401287 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 22:29:26.690630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.708389 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.708653 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.708753 1401287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 22:29:26.857459 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 22:29:26.857566 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.875261 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.875543 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.875567 1401287 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 22:29:27.972927 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-26 22:29:26.855075288 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0926 22:29:27.972953 1401287 machine.go:96] duration metric: took 2.801887579s to provisionDockerMachine
	I0926 22:29:27.972966 1401287 client.go:171] duration metric: took 13.815098068s to LocalClient.Create
	I0926 22:29:27.972989 1401287 start.go:167] duration metric: took 13.815166582s to libmachine.API.Create "addons-619347"
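	The diff-or-replace SSH command above is an idempotence trick: only when the rendered unit differs from the installed one is docker.service.new moved into place and the daemon reloaded and restarted. The same idea expressed host-side as a sketch:

	// Only replace docker.service and bounce the daemon when the rendered
	// unit actually changed; an unchanged diff leaves docker running as-is.
	package provision

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func ensureUnit(path string, rendered []byte) error {
		current, _ := os.ReadFile(path) // a missing file reads as empty
		if bytes.Equal(current, rendered) {
			return nil // no change, no restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "docker").Run()
	}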
	I0926 22:29:27.972999 1401287 start.go:293] postStartSetup for "addons-619347" (driver="docker")
	I0926 22:29:27.973014 1401287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:27.973075 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:27.973123 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:27.990436 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.088898 1401287 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:28.092357 1401287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 22:29:28.092381 1401287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 22:29:28.092389 1401287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 22:29:28.092397 1401287 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 22:29:28.092411 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/addons for local assets ...
	I0926 22:29:28.092496 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/files for local assets ...
	I0926 22:29:28.092533 1401287 start.go:296] duration metric: took 119.526658ms for postStartSetup
	I0926 22:29:28.092888 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.110347 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:28.110666 1401287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:29:28.110720 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.127963 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.219507 1401287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 22:29:28.223820 1401287 start.go:128] duration metric: took 14.067824148s to createHost
	I0926 22:29:28.223850 1401287 start.go:83] releasing machines lock for "addons-619347", held for 14.068007272s
	I0926 22:29:28.223922 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.240598 1401287 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:28.240633 1401287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:28.240652 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.240703 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.257372 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.258797 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.423810 1401287 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:28.428533 1401287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 22:29:28.433038 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 22:29:28.461936 1401287 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 22:29:28.462028 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:28.488392 1401287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0926 22:29:28.488420 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.488455 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.488593 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.505081 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 22:29:28.516249 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 22:29:28.526291 1401287 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 22:29:28.526353 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 22:29:28.536220 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.546282 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 22:29:28.556108 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.565920 1401287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:28.575000 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 22:29:28.584684 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 22:29:28.594441 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 22:29:28.604436 1401287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:28.612926 1401287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:29:28.621307 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:28.686706 1401287 ssh_runner.go:195] Run: sudo systemctl restart containerd
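	containerd.go:146 reconciles /etc/containerd/config.toml by editing individual keys in place (SystemdCgroup, runtime type, sandbox_image) instead of templating the whole file. A sketch of one such edit, equivalent in effect to the sed over SystemdCgroup above:

	// Flip SystemdCgroup = true in config.toml, preserving indentation --
	// the same effect as the sed -r over ^( *)SystemdCgroup in the log.
	package containerdcfg

	import (
		"os"
		"regexp"
	)

	var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

	func enableSystemdCgroup(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		patched := systemdCgroup.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		return os.WriteFile(path, patched, 0o644)
	}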
	I0926 22:29:28.765771 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.765825 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.765881 1401287 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 22:29:28.778235 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.789193 1401287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:28.806369 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.817718 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 22:29:28.828841 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.845391 1401287 ssh_runner.go:195] Run: which cri-dockerd
	I0926 22:29:28.848841 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 22:29:28.859051 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0926 22:29:28.876661 1401287 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 22:29:28.939711 1401287 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 22:29:29.006868 1401287 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0926 22:29:29.007006 1401287 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0926 22:29:29.025882 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0926 22:29:29.037344 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:29.102031 1401287 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 22:29:29.866941 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:29.878676 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 22:29:29.890349 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:29.901859 1401287 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 22:29:29.971712 1401287 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 22:29:30.041653 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.108440 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 22:29:30.127589 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0926 22:29:30.138450 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.204543 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 22:29:30.280240 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:30.292074 1401287 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 22:29:30.292147 1401287 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
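	The 60s socket wait above (and the crictl wait that follows) are plain stat-until-deadline polls against the guest. A sketch of that loop (statRemote stands in for the ssh_runner stat call; the 500ms interval is illustrative):

	// Poll until the remote path exists or the deadline passes, mirroring
	// "Will wait 60s for socket path /var/run/cri-dockerd.sock".
	package waiters

	import (
		"fmt"
		"time"
	)

	func waitForPath(statRemote func(path string) error, path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := statRemote(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}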
	I0926 22:29:30.295851 1401287 start.go:563] Will wait 60s for crictl version
	I0926 22:29:30.295920 1401287 ssh_runner.go:195] Run: which crictl
	I0926 22:29:30.299332 1401287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:30.334344 1401287 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0926 22:29:30.334407 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.359394 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.385840 1401287 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0926 22:29:30.385911 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:30.402657 1401287 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:30.406689 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
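	The grep/echo pipeline above updates /etc/hosts idempotently: drop any stale host.minikube.internal line, append the fresh mapping, copy the file back. The same update as a Go sketch (not minikube's code):

	// Replace-or-append the "192.168.49.1<TAB>host.minikube.internal" line:
	// drop stale entries for the name, then append the current mapping.
	package hosts

	import (
		"os"
		"strings"
	)

	func setEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}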
	I0926 22:29:30.418124 1401287 kubeadm.go:883] updating cluster {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:30.418244 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:30.418289 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.437981 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.438007 1401287 docker.go:621] Images already preloaded, skipping extraction
	I0926 22:29:30.438061 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.457379 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.457402 1401287 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:30.457415 1401287 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0926 22:29:30.457550 1401287 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-619347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 22:29:30.457608 1401287 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 22:29:30.507568 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:30.507618 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:30.507640 1401287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:30.507666 1401287 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-619347 NodeName:addons-619347 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:30.507817 1401287 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-619347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
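	kubeadm.go:195 renders the multi-document config above from the options struct logged at kubeadm.go:189 and ships it to /var/tmp/minikube/kubeadm.yaml.new (2213 bytes, per the scp line below). A toy text/template rendering of one fragment, to show the mechanism; the template here is not minikube's real one:

	// Render a fragment of the kubeadm InitConfiguration from Go data --
	// the same template-then-scp mechanism the log records.
	package kubeadmcfg

	import (
		"bytes"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	`

	func render(addr string, port int) (string, error) {
		t := template.Must(template.New("init").Parse(initCfg))
		var buf bytes.Buffer
		err := t.Execute(&buf, struct {
			AdvertiseAddress string
			APIServerPort    int
		}{addr, port})
		return buf.String(), err
	}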
	
	I0926 22:29:30.507878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:30.517618 1401287 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:30.517680 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:30.526766 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:30.544641 1401287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:30.561976 1401287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0926 22:29:30.579430 1401287 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:30.582806 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:30.593536 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.659215 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:30.680701 1401287 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347 for IP: 192.168.49.2
	I0926 22:29:30.680722 1401287 certs.go:195] generating shared ca certs ...
	I0926 22:29:30.680743 1401287 certs.go:227] acquiring lock for ca certs: {Name:mk6c7838cc2dce82903d545772166c35f6a8ea14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.680859 1401287 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key
	I0926 22:29:30.837572 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt ...
	I0926 22:29:30.837605 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt: {Name:mka8a7fba6c323e3efb5c337a110d874f4a069f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837797 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key ...
	I0926 22:29:30.837813 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key: {Name:mk5241bded4d58e8d730b5c39e3cb6b761b06b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837926 1401287 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key
	I0926 22:29:31.379026 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt ...
	I0926 22:29:31.379062 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt: {Name:mk0b26827e7effdc6e0cb418dab9aa237c23935e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379267 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key ...
	I0926 22:29:31.379283 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key: {Name:mkc17ee61ac662bf18733fd6087e23ac2b546ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379447 1401287 certs.go:257] generating profile certs ...
	I0926 22:29:31.379550 1401287 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key
	I0926 22:29:31.379571 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt with IP's: []
	I0926 22:29:31.863291 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt ...
	I0926 22:29:31.863331 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: {Name:mk25ddefd62aaf8d3e2f6d1fd2d519d1c2b1bea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863552 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key ...
	I0926 22:29:31.863571 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key: {Name:mk8cc05aa8f2753617dfe3d2ae365690c5c6ce86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863711 1401287 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15
	I0926 22:29:31.863742 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0926 22:29:32.476987 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 ...
	I0926 22:29:32.477026 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15: {Name:mkd972c04e4a2418d910fa6a476af654883d90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477231 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 ...
	I0926 22:29:32.477251 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15: {Name:mk6e7ebd8b361ff43396ae1d43e26cc4b3fca9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477363 1401287 certs.go:382] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt
	I0926 22:29:32.477503 1401287 certs.go:386] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key
	I0926 22:29:32.477596 1401287 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key
	I0926 22:29:32.477626 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt with IP's: []
	I0926 22:29:32.537971 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt ...
	I0926 22:29:32.538009 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt: {Name:mkfbd9d4d456b434b04760e6c3778ba177b5caa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538198 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key ...
	I0926 22:29:32.538217 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key: {Name:mkdbd77fea74f3adf740a694b7d5ff5142acf56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
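
	The apiserver profile cert above is signed with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]: the in-cluster Service VIP, loopback, and the node IP, among others. minikube does this in Go (crypto.go), but a hypothetical openssl rendition of the same step, assuming the ca.crt/ca.key shown earlier, would be:

	  # hypothetical openssl equivalent of the profile cert generation
	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
	  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -out apiserver.crt -days 365 \
	    -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2")
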
	I0926 22:29:32.538432 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 22:29:32.538493 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:32.538542 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:32.538584 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:32.539249 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:32.564650 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 22:29:32.589199 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:32.612819 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:32.636809 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:32.660922 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:32.684674 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:32.708845 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:29:32.732866 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:32.759367 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:32.777459 1401287 ssh_runner.go:195] Run: openssl version
	I0926 22:29:32.783004 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:32.794673 1401287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798422 1401287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798497 1401287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.805099 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
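
	The openssl hash and symlink above follow the standard OpenSSL trust-store layout: a CA under /etc/ssl/certs is located via a symlink named after its subject hash, which for this minikubeCA is b5213941. The same two steps as a generic idiom:

	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run
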
	I0926 22:29:32.814605 1401287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:32.817944 1401287 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:32.818016 1401287 kubeadm.go:400] StartCluster: {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:32.818116 1401287 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 22:29:32.836878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:32.846020 1401287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:32.855171 1401287 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 22:29:32.855233 1401287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:32.863903 1401287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:32.863919 1401287 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:32.863955 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:32.872442 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:32.872518 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:32.880882 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:32.889348 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:32.889394 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:32.897735 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.906508 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:32.906558 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.915447 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:32.924534 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:32.924590 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
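
	The four grep/rm pairs above are one check unrolled per file: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected endpoint, and removed otherwise so kubeadm regenerates it. A compact sketch of the same cleanup:

	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done
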
	I0926 22:29:32.933327 1401287 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 22:29:32.971243 1401287 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:32.971298 1401287 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:33.008888 1401287 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 22:29:33.009014 1401287 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 22:29:33.009067 1401287 kubeadm.go:318] OS: Linux
	I0926 22:29:33.009160 1401287 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 22:29:33.009217 1401287 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 22:29:33.009313 1401287 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 22:29:33.009388 1401287 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 22:29:33.009472 1401287 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 22:29:33.009577 1401287 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 22:29:33.009649 1401287 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 22:29:33.009739 1401287 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 22:29:33.064493 1401287 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:33.064612 1401287 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:33.064736 1401287 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:33.076202 1401287 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:33.078537 1401287 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:33.078633 1401287 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:33.078712 1401287 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:33.613982 1401287 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:34.132193 1401287 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:34.241294 1401287 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:34.638661 1401287 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:34.928444 1401287 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:34.928596 1401287 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.122701 1401287 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:35.122888 1401287 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.275604 1401287 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:35.549799 1401287 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:35.689108 1401287 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:35.689184 1401287 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:35.894121 1401287 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:36.122749 1401287 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:36.401681 1401287 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:36.449466 1401287 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:36.577737 1401287 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:36.578213 1401287 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:36.581892 1401287 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:36.583521 1401287 out.go:252]   - Booting up control plane ...
	I0926 22:29:36.583635 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:36.583735 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:36.584452 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:36.594025 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:36.594112 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:36.599591 1401287 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:36.599832 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:36.599913 1401287 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:36.682320 1401287 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:36.682523 1401287 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:37.683335 1401287 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001189529s
	I0926 22:29:37.687852 1401287 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:37.687994 1401287 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0926 22:29:37.688138 1401287 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:37.688267 1401287 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:38.693325 1401287 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005328653s
	I0926 22:29:39.818196 1401287 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.130304657s
	I0926 22:29:41.690178 1401287 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002189462s
	I0926 22:29:41.702527 1401287 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:41.711408 1401287 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:41.720193 1401287 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:41.720435 1401287 kubeadm.go:318] [mark-control-plane] Marking the node addons-619347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:41.727838 1401287 kubeadm.go:318] [bootstrap-token] Using token: ydwgpt.re3mhs2qr7yfu0od
	I0926 22:29:41.729412 1401287 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:41.729554 1401287 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:41.732328 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:41.737352 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:41.740726 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:41.743207 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:41.745363 1401287 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:42.096302 1401287 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:42.513166 1401287 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:43.094717 1401287 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:43.095522 1401287 kubeadm.go:318] 
	I0926 22:29:43.095627 1401287 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:43.095642 1401287 kubeadm.go:318] 
	I0926 22:29:43.095755 1401287 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:43.095774 1401287 kubeadm.go:318] 
	I0926 22:29:43.095814 1401287 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:43.095897 1401287 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:43.095977 1401287 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:43.095986 1401287 kubeadm.go:318] 
	I0926 22:29:43.096062 1401287 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:43.096071 1401287 kubeadm.go:318] 
	I0926 22:29:43.096135 1401287 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:43.096145 1401287 kubeadm.go:318] 
	I0926 22:29:43.096220 1401287 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:43.096324 1401287 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:43.096430 1401287 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:43.096455 1401287 kubeadm.go:318] 
	I0926 22:29:43.096638 1401287 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:43.096786 1401287 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:43.096798 1401287 kubeadm.go:318] 
	I0926 22:29:43.096919 1401287 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097088 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 \
	I0926 22:29:43.097115 1401287 kubeadm.go:318] 	--control-plane 
	I0926 22:29:43.097122 1401287 kubeadm.go:318] 
	I0926 22:29:43.097214 1401287 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:43.097228 1401287 kubeadm.go:318] 
	I0926 22:29:43.097348 1401287 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097470 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 
	I0926 22:29:43.099587 1401287 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 22:29:43.099739 1401287 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:29:43.099768 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:43.099788 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:43.101355 1401287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:43.102553 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:43.112120 1401287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
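
	Configuring the bridge CNI amounts to the single 496-byte conflist scp'd above into /etc/cni/net.d. The log does not echo the payload; a hypothetical minimal bridge conflist for the 10.244.0.0/16 pod subnet, written the same way, might look like:

	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "1.0.0",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF
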
	I0926 22:29:43.130674 1401287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:43.130768 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.130767 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-619347 minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-619347 minikube.k8s.io/primary=true
	I0926 22:29:43.138720 1401287 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:43.217942 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.718375 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.218391 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.718337 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.219035 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.719000 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.218689 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.718531 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.218333 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.718316 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.783783 1401287 kubeadm.go:1113] duration metric: took 4.653074895s to wait for elevateKubeSystemPrivileges
	I0926 22:29:47.783815 1401287 kubeadm.go:402] duration metric: took 14.965805729s to StartCluster
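
	The half-second cadence of the `kubectl get sa default` runs above (22:29:43.2 through 22:29:47.7) is a poll: startup is only considered done once kube-system's controllers have created the default ServiceAccount, which is the 4.65s the elevateKubeSystemPrivileges metric reports. The same wait as a sketch:

	  # poll until the default ServiceAccount exists
	  until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done
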
	I0926 22:29:47.783835 1401287 settings.go:142] acquiring lock: {Name:mk19bb20e8e2719c9f4ae7071ba1f293bea0c47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.783943 1401287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:47.784300 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/kubeconfig: {Name:mk53eccd4814679d9dd1f60d4b668d1b7f9967e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.784499 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:47.784532 1401287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:47.784609 1401287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 22:29:47.784681 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784735 1401287 addons.go:69] Setting registry=true in profile "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784755 1401287 addons.go:69] Setting storage-provisioner=true in profile "addons-619347"
	I0926 22:29:47.784760 1401287 addons.go:238] Setting addon registry=true in "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting registry-creds=true in profile "addons-619347"
	I0926 22:29:47.784770 1401287 addons.go:238] Setting addon storage-provisioner=true in "addons-619347"
	I0926 22:29:47.784775 1401287 addons.go:238] Setting addon registry-creds=true in "addons-619347"
	I0926 22:29:47.784785 1401287 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-619347"
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784811 1401287 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784804 1401287 addons.go:69] Setting inspektor-gadget=true in profile "addons-619347"
	I0926 22:29:47.784822 1401287 addons.go:69] Setting volumesnapshots=true in profile "addons-619347"
	I0926 22:29:47.784827 1401287 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-619347"
	I0926 22:29:47.784832 1401287 addons.go:238] Setting addon inspektor-gadget=true in "addons-619347"
	I0926 22:29:47.784833 1401287 addons.go:238] Setting addon volumesnapshots=true in "addons-619347"
	I0926 22:29:47.784844 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784849 1401287 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:47.784851 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784856 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784879 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784951 1401287 addons.go:69] Setting ingress-dns=true in profile "addons-619347"
	I0926 22:29:47.784970 1401287 addons.go:69] Setting default-storageclass=true in profile "addons-619347"
	I0926 22:29:47.784958 1401287 addons.go:69] Setting gcp-auth=true in profile "addons-619347"
	I0926 22:29:47.784988 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-619347"
	I0926 22:29:47.784817 1401287 addons.go:69] Setting volcano=true in profile "addons-619347"
	I0926 22:29:47.785003 1401287 addons.go:238] Setting addon volcano=true in "addons-619347"
	I0926 22:29:47.785032 1401287 addons.go:69] Setting cloud-spanner=true in profile "addons-619347"
	I0926 22:29:47.785040 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785045 1401287 addons.go:238] Setting addon cloud-spanner=true in "addons-619347"
	I0926 22:29:47.785065 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785262 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784800 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785379 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784973 1401287 addons.go:238] Setting addon ingress-dns=true in "addons-619347"
	I0926 22:29:47.785498 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785518 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785535 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785723 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785798 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785980 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784992 1401287 mustload.go:65] Loading cluster: addons-619347
	I0926 22:29:47.784762 1401287 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-619347"
	I0926 22:29:47.787331 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784792 1401287 addons.go:69] Setting metrics-server=true in profile "addons-619347"
	I0926 22:29:47.784734 1401287 addons.go:69] Setting yakd=true in profile "addons-619347"
	I0926 22:29:47.787078 1401287 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787824 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.788010 1401287 addons.go:238] Setting addon metrics-server=true in "addons-619347"
	I0926 22:29:47.788028 1401287 addons.go:238] Setting addon yakd=true in "addons-619347"
	I0926 22:29:47.788047 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.788063 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.789412 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787118 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784734 1401287 addons.go:69] Setting ingress=true in profile "addons-619347"
	I0926 22:29:47.789936 1401287 addons.go:238] Setting addon ingress=true in "addons-619347"
	I0926 22:29:47.789980 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784814 1401287 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-619347"
	I0926 22:29:47.790231 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-619347"
	I0926 22:29:47.790451 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:47.793232 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.793847 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.802421 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.803014 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.835418 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0926 22:29:47.836021 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:47.839393 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:47.839421 1401287 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:47.840142 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.845675 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0926 22:29:47.849257 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0926 22:29:47.856053 1401287 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:47.858820 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0926 22:29:47.856545 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:47.858894 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.860040 1401287 addons.go:238] Setting addon default-storageclass=true in "addons-619347"
	I0926 22:29:47.860081 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.860516 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:47.860534 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:47.860630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.866839 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.873854 1401287 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:47.875341 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:47.875365 1401287 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:47.875428 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.882655 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:47.882749 1401287 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:47.884700 1401287 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:47.885073 1401287 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:47.885418 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:47.885504 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.884703 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:47.887232 1401287 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:47.887315 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:47.887396 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.887247 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:47.889515 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:47.892008 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:47.893405 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:47.895131 1401287 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:47.896348 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:47.896370 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:47.896434 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.897311 1401287 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-619347"
	I0926 22:29:47.897358 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.898142 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.899126 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:47.900143 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:47.902104 1401287 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:47.902740 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:47.902755 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:47.902813 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.903595 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:47.903615 1401287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:47.903685 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.911178 1401287 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:47.912616 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:47.912637 1401287 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:47.912867 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.916927 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:47.918186 1401287 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:47.919909 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.920091 1401287 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:47.920106 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:47.920166 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.921441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.922745 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.923875 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:47.923890 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:47.923943 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.926937 1401287 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:47.927973 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:47.927993 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:47.928052 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.940536 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.942062 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0926 22:29:47.945122 1401287 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:47.946248 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:47.946273 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:47.946337 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.951570 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.958865 1401287 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:47.959859 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.960450 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.961755 1401287 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:47.965573 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:47.965594 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:47.965659 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.966411 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976561 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976622 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977107 1401287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:47.977106 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977119 1401287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:47.977177 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.980224 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.984609 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.989681 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.990796 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:47.997697 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.997795 1401287 retry.go:31] will retry after 178.321817ms: ssh: handshake failed: EOF
	W0926 22:29:47.999217 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.999256 1401287 retry.go:31] will retry after 245.552991ms: ssh: handshake failed: EOF
	I0926 22:29:48.009280 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.011073 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.018912 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:48.019331 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
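
	The bash one-liner above edits the CoreDNS Corefile in flight: sed splices a hosts plugin ahead of the forward directive and a log directive ahead of errors, and kubectl replace writes the ConfigMap back, so host.minikube.internal resolves to the gateway (192.168.49.1) from inside pods. The injected fragment, as it lands in the Corefile:

	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }
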
	I0926 22:29:48.022191 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.027290 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:48.029295 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:48.029402 1401287 retry.go:31] will retry after 284.652213ms: ssh: handshake failed: EOF
	I0926 22:29:48.076445 1401287 node_ready.go:35] waiting up to 6m0s for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081001 1401287 node_ready.go:49] node "addons-619347" is "Ready"
	I0926 22:29:48.081030 1401287 node_ready.go:38] duration metric: took 4.536047ms for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081059 1401287 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:48.081111 1401287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:29:48.140834 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.140859 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:48.162194 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:48.165548 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:48.168900 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:48.182428 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.188630 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:48.188700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:48.201257 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:48.201282 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:48.206272 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:48.206297 1401287 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:48.207662 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:48.207682 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:48.218223 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:48.220995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:48.226298 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:48.226321 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:48.226742 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:48.226761 1401287 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:48.262874 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:48.262908 1401287 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:48.275319 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:48.275353 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:48.291538 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:48.291571 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:48.310099 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.310124 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:48.326030 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:48.326056 1401287 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:48.326064 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:48.326081 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:48.368923 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.368970 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:48.377708 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:48.377782 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:48.395824 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.409558 1401287 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0926 22:29:48.410568 1401287 api_server.go:72] duration metric: took 626.001878ms to wait for apiserver process to appear ...
	I0926 22:29:48.410598 1401287 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:29:48.410621 1401287 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0926 22:29:48.424990 1401287 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0926 22:29:48.427236 1401287 api_server.go:141] control plane version: v1.34.0
	I0926 22:29:48.427333 1401287 api_server.go:131] duration metric: took 16.7257ms to wait for apiserver health ...
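The healthz wait above is a plain HTTP probe: GET https://192.168.49.2:8443/healthz and accept a 200 response whose body is "ok". A self-contained sketch of that check, with certificate verification disabled only to keep the example short (the real client trusts the cluster CA):

    // Minimal sketch of the apiserver healthz probe logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            // skip verification only for brevity; minikube uses the cluster CA
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := apiserverHealthy("https://192.168.49.2:8443/healthz")
        fmt.Println(ok, err)
    }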
	I0926 22:29:48.427359 1401287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:29:48.434147 1401287 system_pods.go:59] 7 kube-system pods found
	I0926 22:29:48.434185 1401287 system_pods.go:61] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434195 1401287 system_pods.go:61] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434206 1401287 system_pods.go:61] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.434221 1401287 system_pods.go:61] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.434230 1401287 system_pods.go:61] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.434237 1401287 system_pods.go:61] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.434245 1401287 system_pods.go:61] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.434254 1401287 system_pods.go:74] duration metric: took 6.877162ms to wait for pod list to return data ...
	I0926 22:29:48.434265 1401287 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:29:48.437910 1401287 default_sa.go:45] found service account: "default"
	I0926 22:29:48.437986 1401287 default_sa.go:55] duration metric: took 3.713655ms for default service account to be created ...
	I0926 22:29:48.438009 1401287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:29:48.449749 1401287 system_pods.go:86] 7 kube-system pods found
	I0926 22:29:48.449859 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449883 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449933 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.449956 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.449992 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.450028 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.450047 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.450113 1401287 retry.go:31] will retry after 220.911414ms: missing components: kube-dns, kube-proxy
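The system_pods.go loop above lists kube-system pods and retries until every required component reports Running; at this point it still sees kube-dns and kube-proxy as Pending. A compact client-go sketch of the same wait, assuming the conventional k8s-app pod labels and the kubeconfig path used elsewhere in this log:

    // Sketch of polling kube-system pods until required components run,
    // mirroring the system_pods.go wait recorded above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForSystemPods(cs *kubernetes.Clientset, needed []string) error {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return err
            }
            running := map[string]bool{}
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running[p.Labels["k8s-app"]] = true // assumes standard k8s-app labels
                }
            }
            var missing []string
            for _, name := range needed {
                if !running[name] {
                    missing = append(missing, name)
                }
            }
            if len(missing) == 0 {
                return nil
            }
            fmt.Printf("will retry: missing components: %v\n", missing)
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %v", needed)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForSystemPods(cs, []string{"kube-dns", "kube-proxy"}))
    }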
	I0926 22:29:48.454420 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.454446 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:48.467995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.486003 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:48.486043 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:48.505966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:48.506005 1401287 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:48.519158 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:48.533016 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.564879 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:48.613388 1401287 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.613410 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:48.638555 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:48.678611 1401287 system_pods.go:86] 8 kube-system pods found
	I0926 22:29:48.678647 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending
	I0926 22:29:48.678660 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678669 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678691 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.678698 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:48.678709 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.678717 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.678724 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.678749 1401287 retry.go:31] will retry after 325.08055ms: missing components: kube-dns, kube-proxy
	I0926 22:29:48.694878 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:48.694910 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:48.717411 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.874966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:48.875006 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:48.915620 1401287 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-619347" context rescaled to 1 replicas
	I0926 22:29:48.947182 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:48.947278 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:49.013309 1401287 system_pods.go:86] 9 kube-system pods found
	I0926 22:29:49.013412 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.013424 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013461 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013471 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.013525 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.013537 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.013546 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.013553 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.013560 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.013636 1401287 retry.go:31] will retry after 486.746944ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.102910 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:49.102950 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:49.259460 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:49.259504 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:49.377226 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:49.377250 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:49.493928 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:49.493968 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:49.517924 1401287 system_pods.go:86] 14 kube-system pods found
	I0926 22:29:49.517990 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.518004 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518013 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518022 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.518044 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.518055 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.518063 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:49.518072 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.518081 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.518100 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.518123 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:49.518143 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:49.518154 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:49.518165 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending
	I0926 22:29:49.518211 1401287 retry.go:31] will retry after 599.651697ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.625802 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:50.130675 1401287 system_pods.go:86] 15 kube-system pods found
	I0926 22:29:50.130828 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:50.130842 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130854 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130861 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running
	I0926 22:29:50.130866 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:50.130875 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:50.130885 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:50.130892 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Running
	I0926 22:29:50.130900 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:50.130908 1401287 system_pods.go:89] "metrics-server-85b7d694d7-mjlqr" [18663e65-efc9-4e15-8dad-c4e23a7f7f18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:29:50.130924 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:50.130932 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:50.130942 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:50.130951 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:50.130958 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:29:50.130969 1401287 system_pods.go:126] duration metric: took 1.692943423s to wait for k8s-apps to be running ...
	I0926 22:29:50.130981 1401287 system_svc.go:44] waiting for kubelet service to be running ...
	I0926 22:29:50.131036 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:29:50.228682 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.066443039s)
	I0926 22:29:50.228730 1401287 addons.go:479] Verifying addon ingress=true in "addons-619347"
	I0926 22:29:50.229183 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.06360117s)
	I0926 22:29:50.229277 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.06027927s)
	I0926 22:29:50.229386 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.046934043s)
	W0926 22:29:50.229417 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:50.229439 1401287 retry.go:31] will retry after 244.753675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
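Each retry of this apply fails identically because kubectl validates every document it is given: /etc/kubernetes/addons/ig-crd.yaml as shipped here declares neither apiVersion nor kind, so re-running (even with --force, as the later attempts show) cannot help; only --validate=false, as the error message suggests, or a corrected file would. A hypothetical pre-flight check for that header, walking the documents of a manifest with gopkg.in/yaml.v3:

    // Hypothetical pre-flight check mirroring the validation kubectl
    // performs: every applied document must declare apiVersion and kind.
    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func validateManifest(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        dec := yaml.NewDecoder(bytes.NewReader(data)) // handles "---" separated docs
        for i := 0; ; i++ {
            var tm typeMeta
            if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
                return nil
            } else if err != nil {
                return err
            }
            if tm.APIVersion == "" || tm.Kind == "" {
                return fmt.Errorf("%s: document %d: apiVersion or kind not set", path, i)
            }
        }
    }

    func main() {
        fmt.Println(validateManifest("/etc/kubernetes/addons/ig-crd.yaml"))
    }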
	I0926 22:29:50.229506 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.040735105s)
	I0926 22:29:50.229590 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040703194s)
	I0926 22:29:50.229630 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.011384775s)
	I0926 22:29:50.229674 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00860092s)
	I0926 22:29:50.229967 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.834111385s)
	I0926 22:29:50.229990 1401287 addons.go:479] Verifying addon registry=true in "addons-619347"
	I0926 22:29:50.230454 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.762415616s)
	I0926 22:29:50.230635 1401287 addons.go:479] Verifying addon metrics-server=true in "addons-619347"
	I0926 22:29:50.230518 1401287 out.go:179] * Verifying ingress addon...
	I0926 22:29:50.233574 1401287 out.go:179] * Verifying registry addon...
	I0926 22:29:50.234496 1401287 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:29:50.236422 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:29:50.239932 1401287 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:29:50.239997 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.242126 1401287 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:29:50.242195 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:50.474912 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:50.747610 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.749841 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.178335 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.659134928s)
	I0926 22:29:51.178429 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.645380917s)
	I0926 22:29:51.178600 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.613538879s)
	I0926 22:29:51.178880 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.540232302s)
	I0926 22:29:51.179022 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461568485s)
	W0926 22:29:51.179054 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:29:51.179074 1401287 retry.go:31] will retry after 372.721698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
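This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass CRD is created by the same apply, but the API server has not yet established it when the csi-hostpath-snapclass object arrives, hence "ensure CRDs are installed first"; the forced re-apply later in this log completes once discovery catches up. A sketch of waiting for a CRD's Established condition with the apiextensions client (illustrative, not minikube's approach):

    // Sketch: poll a CustomResourceDefinition until its Established
    // condition is True, so dependent objects can be applied safely.
    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitEstablished(cs *apiextclient.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("CRD %s not established within %v", name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := apiextclient.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute))
    }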
	I0926 22:29:51.180773 1401287 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-619347 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:29:51.223913 1401287 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.092854415s)
	I0926 22:29:51.223952 1401287 system_svc.go:56] duration metric: took 1.092967022s WaitForService to wait for kubelet
	I0926 22:29:51.223963 1401287 kubeadm.go:586] duration metric: took 3.439402099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:51.223986 1401287 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:29:51.224342 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.598487819s)
	I0926 22:29:51.224378 1401287 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:51.225939 1401287 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:29:51.228192 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:29:51.229798 1401287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 22:29:51.229833 1401287 node_conditions.go:123] node cpu capacity is 8
	I0926 22:29:51.229856 1401287 node_conditions.go:105] duration metric: took 5.863751ms to run NodePressure ...
	I0926 22:29:51.229880 1401287 start.go:241] waiting for startup goroutines ...
	I0926 22:29:51.234026 1401287 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:29:51.234047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.241936 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.243854 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.552700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:51.709711 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.234742831s)
	W0926 22:29:51.709760 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:51.709786 1401287 retry.go:31] will retry after 268.370333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:51.732520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.738383 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.739361 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.978851 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:52.231665 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.237879 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.238844 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:52.731592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.738117 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.739055 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.232517 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.237333 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.239471 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.731711 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.737791 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.738851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.244329 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.691529274s)
	I0926 22:29:54.244428 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.26554658s)
	W0926 22:29:54.244461 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.244491 1401287 retry.go:31] will retry after 392.451192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.303455 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.303472 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.303697 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.637695 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.732408 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.737348 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.738840 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:55.209616 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.209647 1401287 retry.go:31] will retry after 748.885115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.232030 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.238153 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.239111 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.331196 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:55.331261 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.348751 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:55.457803 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:29:55.479373 1401287 addons.go:238] Setting addon gcp-auth=true in "addons-619347"
	I0926 22:29:55.479441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:55.479850 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:55.499515 1401287 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:29:55.499611 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.520325 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:55.618144 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:55.619415 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:29:55.621107 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:29:55.621131 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:29:55.643383 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:29:55.643405 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:29:55.664765 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.664789 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:29:55.685778 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.732904 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.737583 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.739755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.958754 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:56.145831 1401287 addons.go:479] Verifying addon gcp-auth=true in "addons-619347"
	I0926 22:29:56.147565 1401287 out.go:179] * Verifying gcp-auth addon...
	I0926 22:29:56.149656 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:29:56.153451 1401287 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:29:56.153473 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.234575 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:56.238524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.240547 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.753812 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.754009 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.754105 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.754175 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:29:56.846438 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:56.846489 1401287 retry.go:31] will retry after 1.306898572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
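
The stderr above is client-side validation: kubectl rejects any manifest document that does not declare apiVersion and kind (the message itself points at --validate=false as the only way to skip the check; apply --force changes how conflicting objects are replaced, not how they are validated). Since /etc/kubernetes/addons/ig-crd.yaml on disk does not change between attempts, every retry below fails with the identical dump. As a rough illustration of the failing check — stdlib Go only; the filename is taken from the log, everything else here is hypothetical, not kubectl or minikube source:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Filename from the log; in the test it lives under /etc/kubernetes/addons/.
	data, err := os.ReadFile("ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split a multi-document manifest on its "---" separators and flag any
	// non-empty document missing the two required header fields.
	for i, doc := range strings.Split(string(data), "\n---") {
		body := strings.TrimSpace(doc)
		if body == "" {
			continue // empty documents are skipped, not rejected
		}
		// Naive substring scan for the sketch; real validation parses the YAML.
		hasAPIVersion := strings.Contains(body, "apiVersion:")
		hasKind := strings.Contains(body, "kind:")
		if !hasAPIVersion || !hasKind {
			fmt.Printf("document %d: apiVersion set=%t, kind set=%t\n",
				i, hasAPIVersion, hasKind)
		}
	}
}

Note that ig-deployment.yaml applies cleanly on each attempt (everything reports "unchanged" or "configured"); only the CRD file trips validation, so no amount of retrying can succeed without the file itself being fixed.
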
	I0926 22:29:57.154380 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.257757 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:57.257867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.257914 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.653373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.731799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.738612 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.739139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.153929 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:58.154158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.231698 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.238196 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.239871 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.653423 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.732047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.737700 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.739381 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:58.876131 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:58.876169 1401287 retry.go:31] will retry after 1.510195391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:59.153627 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.231973 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.237626 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.239442 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:59.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.732199 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.737381 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.739318 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.154349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.234946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.237553 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.238970 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.387250 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.653371 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.754562 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.754718 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.754737 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:01.142390 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.142433 1401287 retry.go:31] will retry after 2.823589735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.153470 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.231864 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.238191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.238929 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:01.653817 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.732601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.738292 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.738765 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.153510 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.232061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:02.237606 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.239333 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.653691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.785100 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.785181 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.785282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.228531 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.231398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.237322 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.239087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.653658 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.754788 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.754892 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.754903 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.966722 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.154061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:04.232281 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.240238 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.653129 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:04.657965 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.657997 1401287 retry.go:31] will retry after 3.931075545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.732441 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.738568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.739156 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.153676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.231619 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.237952 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.238902 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.653858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.732363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.737932 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.739708 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.153005 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.232588 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.238508 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.238930 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.653625 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.732133 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.737660 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.739398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.153662 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.231544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.238376 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.238896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.653623 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.732168 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.737693 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.739572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.153679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.231882 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.237268 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.239112 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.589607 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:08.653128 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.732858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.737867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.739211 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.153590 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.232224 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.237615 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.239714 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:09.284897 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:09.284936 1401287 retry.go:31] will retry after 5.203674911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
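
Stepping back from the individual dumps: the retry intervals logged so far (1.31s, 1.51s, 2.82s, 3.93s, 5.20s) grow roughly geometrically with some randomness, the shape of jittered exponential backoff. A minimal self-contained sketch of that pattern — an illustration only, assuming nothing about minikube's actual retry.go beyond the interval shape; applyAddon is a placeholder:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// doubling a jittered delay between failures.
func retryWithBackoff(maxAttempts int, base time.Duration, op func() error) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		// Sleep between 1x and 2x the current delay, then double it.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return nil
}

func main() {
	// applyAddon is a stand-in for the failing kubectl apply above.
	applyAddon := func() error { return errors.New("Process exited with status 1") }
	_ = retryWithBackoff(5, time.Second, applyAddon)
}

The jitter keeps independent retriers from synchronizing their attempts; deterministic doubling alone would produce exact powers of the base delay, which the logged intervals clearly are not.
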
	I0926 22:30:09.653321 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.731879 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.737435 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.739163 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.153976 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.232225 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.237891 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.239799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.652648 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.732289 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.740552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.740620 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.153709 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.231772 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.237915 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.238911 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.653574 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.731464 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.737883 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.738742 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.154161 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.255109 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.255143 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.255266 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.653341 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.732278 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.737987 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.739675 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.152601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.231735 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.238458 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.238993 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.653963 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.732677 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.737942 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.738815 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.153349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.231707 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.238128 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.238724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.489029 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:14.654034 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.755687 1401287 kapi.go:107] duration metric: took 24.519261155s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 22:30:14.755725 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.755952 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.152792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:15.222551 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.222596 1401287 retry.go:31] will retry after 5.506436948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.231403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.237852 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.662260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.731552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.738097 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.154099 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.231851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.237284 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.653593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.732118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.737657 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.153191 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.232638 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.238260 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.654087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.732572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.737869 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.153497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.231724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.237938 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.653474 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.754180 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.754664 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.153672 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.231937 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.237429 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.653500 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.732332 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.737902 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.153193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.231558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.238229 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.653596 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.729807 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:20.755463 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.755497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.156185 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.232540 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.237339 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:21.506242 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.506283 1401287 retry.go:31] will retry after 16.573257161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.653673 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.746511 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.747024 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.154193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.255191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.255336 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.653679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.732249 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.765524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.153260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.232592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.237546 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.653954 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.732247 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.738249 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.153348 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.231679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.238206 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.653640 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.754172 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.754291 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.155071 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.232312 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.237762 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.654098 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.755772 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.756117 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.153020 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.232253 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.237493 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.653784 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.731755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.738149 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.153957 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.231912 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.237304 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.740418 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.740422 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.740489 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.153035 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.232351 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.253652 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.653198 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.732594 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.738617 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.153818 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.255363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.255402 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.653377 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.795403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.795568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.154437 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.255203 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.255255 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.654322 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.731875 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.738025 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.153152 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.232403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.686139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.732196 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.737642 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.153176 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.232567 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.238193 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.653520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.731607 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.738120 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.153329 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.231836 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.238090 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.653138 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.753505 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.753695 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.153545 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.232120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.237425 1401287 kapi.go:107] duration metric: took 44.002941806s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:30:34.654015 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.732058 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.153560 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.232023 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.653149 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.733392 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.195661 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.294162 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.653726 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.732044 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.153456 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.231729 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.653114 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.732251 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.080636 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:38.154372 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.231375 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.653809 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.782691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:30:38.852949 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:38.852986 1401287 retry.go:31] will retry after 15.881899723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:39.153131 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.232352 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.653465 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.731259 1401287 kapi.go:107] duration metric: took 48.503064069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
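
The kapi.go:96 lines interleaved through this section are per-selector poll loops: each ticks at roughly half-second intervals, lists pods matching one label selector, and exits with a kapi.go:107 duration metric once the pods come up (registry after 24.5s, ingress-nginx after 44.0s, csi-hostpath-driver after 48.5s above, leaving only gcp-auth still pending). A minimal sketch of such a loop, shelling out to kubectl to stay self-contained — the namespace and the Running-phase success condition are assumptions for illustration, not taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls until every pod matching selector reports phase Running,
// mirroring the "waiting for pod ... current state: Pending" loop above.
func waitForPods(selector, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		allRunning := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: %v\n", selector, phases)
		time.Sleep(500 * time.Millisecond) // the loops above tick at ~500ms
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	// Selector taken from the log; the namespace is a guess for illustration.
	err := waitForPods("kubernetes.io/minikube-addons=csi-hostpath-driver",
		"kube-system", 6*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
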
	I0926 22:30:40.153304 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.652405 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.153555 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.652676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.152544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.653090 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.153739 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.652905 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.153461 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.653397 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.652913 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.153414 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.652678 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.153158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.653282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.152600 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.652859 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.153593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.652792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.152790 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.652641 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.153977 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.653558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.153042 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.653062 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.153284 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.653232 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.153389 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.653118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.735407 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:55.153085 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:55.342933 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.342967 1401287 retry.go:31] will retry after 26.788650375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.653379 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.653069 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.153833 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.153701 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.653075 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.153896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.652981 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.152946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.653566 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.152984 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.653887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.153373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.654120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.153468 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.653248 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.153804 1401287 kapi.go:107] duration metric: took 1m8.004150077s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:04.155559 1401287 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-619347 cluster.
	I0926 22:31:04.156826 1401287 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:04.158107 1401287 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
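
The `gcp-auth-skip-secret` opt-out mentioned above is an ordinary pod label. A minimal sketch of a pod the gcp-auth webhook should leave alone (the pod name and image are illustrative; "true" is the conventional value, and whether the webhook checks the value or only the key's presence is an assumption here):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth                  # illustrative name
      labels:
        gcp-auth-skip-secret: "true"     # opts this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: busybox:stable
        command: ["sleep", "3600"]
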
	I0926 22:31:22.132659 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:22.704256 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0926 22:31:22.704391 1401287 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
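
The repeated failure above is kubectl's client-side validation: `apiVersion not set, kind not set` means at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml carries neither field, for example an empty document left behind a stray `---` separator or a fragment that lost its header. Every document kubectl applies must open with both; a generic stand-in (deliberately not the real inspektor-gadget CRD content, which this log does not show):

    apiVersion: v1            # required on every YAML document
    kind: ConfigMap           # required on every YAML document
    metadata:
      name: example           # illustrative
      namespace: gadget
    data: {}

The suggested `--validate=false` only silences the client-side check; a document genuinely missing apiVersion and kind would still be rejected server-side, so fixing the file is the real remedy.
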
	I0926 22:31:22.706313 1401287 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, default-storageclass, volcano, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0926 22:31:22.707981 1401287 addons.go:514] duration metric: took 1m34.923379678s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin metrics-server default-storageclass volcano registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0926 22:31:22.708039 1401287 start.go:246] waiting for cluster config update ...
	I0926 22:31:22.708063 1401287 start.go:255] writing updated cluster config ...
	I0926 22:31:22.708371 1401287 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:22.712517 1401287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:22.716253 1401287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.720372 1401287 pod_ready.go:94] pod "coredns-66bc5c9577-qctdw" is "Ready"
	I0926 22:31:22.720398 1401287 pod_ready.go:86] duration metric: took 4.121653ms for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.722139 1401287 pod_ready.go:83] waiting for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.725796 1401287 pod_ready.go:94] pod "etcd-addons-619347" is "Ready"
	I0926 22:31:22.725814 1401287 pod_ready.go:86] duration metric: took 3.654877ms for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.727751 1401287 pod_ready.go:83] waiting for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.731230 1401287 pod_ready.go:94] pod "kube-apiserver-addons-619347" is "Ready"
	I0926 22:31:22.731252 1401287 pod_ready.go:86] duration metric: took 3.484052ms for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.733085 1401287 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.117180 1401287 pod_ready.go:94] pod "kube-controller-manager-addons-619347" is "Ready"
	I0926 22:31:23.117210 1401287 pod_ready.go:86] duration metric: took 384.107267ms for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.316538 1401287 pod_ready.go:83] waiting for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.716914 1401287 pod_ready.go:94] pod "kube-proxy-sdscg" is "Ready"
	I0926 22:31:23.716945 1401287 pod_ready.go:86] duration metric: took 400.37971ms for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.917057 1401287 pod_ready.go:83] waiting for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316600 1401287 pod_ready.go:94] pod "kube-scheduler-addons-619347" is "Ready"
	I0926 22:31:24.316631 1401287 pod_ready.go:86] duration metric: took 399.543309ms for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316645 1401287 pod_ready.go:40] duration metric: took 1.604097264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:24.363816 1401287 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:24.365720 1401287 out.go:179] * Done! kubectl is now configured to use "addons-619347" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 26 22:40:03 addons-619347 dockerd[1116]: time="2025-09-26T22:40:03.951061919Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:40:03 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:40:03Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 26 22:40:18 addons-619347 dockerd[1116]: time="2025-09-26T22:40:18.337533725Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:18 addons-619347 dockerd[1116]: time="2025-09-26T22:40:18.367998157Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:40:24 addons-619347 dockerd[1116]: time="2025-09-26T22:40:24.764931443Z" level=info msg="ignoring event" container=02ed02bee6d73780bddc06cb8b6a6b9f7bca62787f463b8a37a9607797b22ec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:40:49 addons-619347 dockerd[1116]: time="2025-09-26T22:40:49.679587150Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0
	Sep 26 22:40:49 addons-619347 dockerd[1116]: time="2025-09-26T22:40:49.706418819Z" level=info msg="ignoring event" container=8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:40:49 addons-619347 dockerd[1116]: time="2025-09-26T22:40:49.848919751Z" level=info msg="ignoring event" container=287d670c65c8c8a8873127a7df0f4d937218417b7d71e0eba154a8654e5c7081 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:04 addons-619347 dockerd[1116]: time="2025-09-26T22:41:04.418844414Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:41:26 addons-619347 dockerd[1116]: time="2025-09-26T22:41:26.400897955Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.267973692Z" level=info msg="ignoring event" container=4cc3707d46bf88e845610518f12c4e221232481beebc0d306be041784d9eeef9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.270029745Z" level=info msg="ignoring event" container=d91e1c6dab5ecd42dea5bb5729717ec362472d147203946ae302bbc7255f135a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.435356683Z" level=info msg="ignoring event" container=3dd9df25ed9e8ddc96d80cc8c16909501f9ba357934347571c9cf4791f330171 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.443346495Z" level=info msg="ignoring event" container=f5d5e2661efee4aea7ebf200f10bf32559fd2d948ee3d166fd0126357d613c32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.914633704Z" level=info msg="ignoring event" container=e37820c539b12eda02c49ae41d1aac4d954ca2e1a54cb41c5290533c5f905855 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.925827045Z" level=info msg="ignoring event" container=931b17716c09bcabb0e43095916c22974d139825f6f9f312ed35d3449c26c4fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.948762905Z" level=info msg="ignoring event" container=7d61fc01cddfdf3e93b8ed7c6d7b058941466b8e145d20a3a24e0dcd1baf2751 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.949155666Z" level=info msg="ignoring event" container=1dafa88bf03ffa52e0af7df8bcf10182168bee4fe10645ab954bedeb95f5038a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.951644355Z" level=info msg="ignoring event" container=2272adc16d5b82c96c472baf4f596717fcf44ff16d46d05b740fba8c19417b8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.957756734Z" level=info msg="ignoring event" container=ce8cf08b141fd7accb026fb16f373866103a37c2cf2136e05fbb43114f2ad79c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.958956971Z" level=info msg="ignoring event" container=7dcfe799d3773e9723224f5598253e916b9a18ef4ebef685c1f541ee3316b4ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:47 addons-619347 dockerd[1116]: time="2025-09-26T22:41:47.960054756Z" level=info msg="ignoring event" container=4830d4a0f03bf7a8ceaa6b4835c6cb8b6f782347cae9c2a00129320775fe268f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:48 addons-619347 dockerd[1116]: time="2025-09-26T22:41:48.084078394Z" level=info msg="ignoring event" container=84987d9e1a070118b0bfb942ec96367365f9945c42288cdc4401eaa88a403caf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:48 addons-619347 dockerd[1116]: time="2025-09-26T22:41:48.124764617Z" level=info msg="ignoring event" container=de2617410b653ae2df54c8195c9176d2cf82aa1585ce73e296a978b3796ced20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:48 addons-619347 dockerd[1116]: time="2025-09-26T22:41:48.137614804Z" level=info msg="ignoring event" container=b9ce75482df7adb40e1e37f9e9c8aa5189b96384cd17e7c360401d04e03f0b8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
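
The `toomanyrequests` failures above are Docker Hub's anonymous pull rate limit; the busybox and nginx images the failing tests need never arrive. Docker's download-rate-limit documentation describes probing the remaining quota from the affected node without spending a pull; a sketch using curl and jq against the documented ratelimitpreview/test repository:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The `ratelimit-limit` and `ratelimit-remaining` response headers report the window; HEAD requests are documented as not counting against it. Authenticating the runner's Docker daemon or fronting it with a registry mirror raises or removes the limit.
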
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68f3619046214       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   a60bbae2dab32       busybox
	728e0cf65646d       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             12 minutes ago      Running             controller                0                   db768efcc91e0       ingress-nginx-controller-9cc49f96f-ghq9n
	2be186df9d067       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   13 minutes ago      Exited              patch                     0                   e4cb125881f09       ingress-nginx-admission-patch-65dgz
	64e745dd36107       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   13 minutes ago      Exited              create                    0                   8c9898018e8fa       ingress-nginx-admission-create-dbtd8
	a6d48b6dd738f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            13 minutes ago      Running             gadget                    0                   1e350b656bd65       gadget-9rfhl
	6c95150654506       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         13 minutes ago      Running             minikube-ingress-dns      0                   8f1cf5e8da338       kube-ingress-dns-minikube
	d9822a41079f6       6e38f40d628db                                                                                                                13 minutes ago      Running             storage-provisioner       0                   e7dd4d41d742b       storage-provisioner
	9ea233eb6b299       52546a367cc9e                                                                                                                13 minutes ago      Running             coredns                   0                   3dff0fbc29922       coredns-66bc5c9577-qctdw
	227d066a100ce       df0860106674d                                                                                                                13 minutes ago      Running             kube-proxy                0                   9cd7f6237aa02       kube-proxy-sdscg
	f5b2050f68de5       a0af72f2ec6d6                                                                                                                13 minutes ago      Running             kube-controller-manager   0                   fbe20fd4325ef       kube-controller-manager-addons-619347
	8209664c099ee       46169d968e920                                                                                                                13 minutes ago      Running             kube-scheduler            0                   779a1e971ca62       kube-scheduler-addons-619347
	9d1b130b03b02       90550c43ad2bc                                                                                                                13 minutes ago      Running             kube-apiserver            0                   f2516f75f5542       kube-apiserver-addons-619347
	5ae0da6e5bfbf       5f1f5298c888d                                                                                                                13 minutes ago      Running             etcd                      0                   fa78f9e958055       etcd-addons-619347
	
	
	==> controller_ingress [728e0cf65646] <==
	I0926 22:30:36.082933       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0926 22:30:36.083217       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0926 22:30:36.083836       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:30:36.090331       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0926 22:30:36.090382       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-ghq9n"
	I0926 22:30:36.093730       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	I0926 22:30:36.132936       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:30:36.133071       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0926 22:30:36.133167       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0926 22:30:36.196634       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	W0926 22:35:31.376780       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0926 22:35:31.378855       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0926 22:35:31.381841       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0926 22:35:31.382122       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"49217b09-005f-4368-a333-dd023eb3d6ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0926 22:35:32.307592       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0926 22:35:32.308416       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:35:32.351887       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:35:32.352086       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0926 22:35:35.641811       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0926 22:35:36.097517       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0926 22:35:36.101827       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"49217b09-005f-4368-a333-dd023eb3d6ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2279", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0926 22:35:38.975235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0926 22:35:42.307868       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0926 22:41:47.833538       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0926 22:41:51.167110       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
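
The recurring "does not have any active Endpoint" warnings mean Service default/nginx selects no Ready pods, consistent with the nginx image never pulling under the rate limit above rather than with an ingress-side fault. The state can be confirmed with stock kubectl views (context name taken from this run):

    kubectl --context addons-619347 -n default get endpoints nginx
    kubectl --context addons-619347 -n default get pods -o wide
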
	
	
	==> coredns [9ea233eb6b29] <==
	[INFO] 10.244.0.8:44508 - 52121 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000116925s
	[INFO] 10.244.0.8:40588 - 27017 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000090303s
	[INFO] 10.244.0.8:40588 - 26695 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000142751s
	[INFO] 10.244.0.8:32780 - 27322 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000091822s
	[INFO] 10.244.0.8:32780 - 26988 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130714s
	[INFO] 10.244.0.8:34268 - 17213 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132338s
	[INFO] 10.244.0.8:34268 - 16970 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009144s
	[INFO] 10.244.0.27:32935 - 45410 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000327431s
	[INFO] 10.244.0.27:49406 - 23181 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000412135s
	[INFO] 10.244.0.27:42691 - 10663 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129221s
	[INFO] 10.244.0.27:49167 - 28887 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157287s
	[INFO] 10.244.0.27:40544 - 36384 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160696s
	[INFO] 10.244.0.27:45145 - 3022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123636s
	[INFO] 10.244.0.27:57336 - 33875 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00499531s
	[INFO] 10.244.0.27:41391 - 16202 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005792959s
	[INFO] 10.244.0.27:59854 - 59303 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005004398s
	[INFO] 10.244.0.27:34824 - 56259 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005925015s
	[INFO] 10.244.0.27:36869 - 29305 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004734879s
	[INFO] 10.244.0.27:45437 - 987 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00498032s
	[INFO] 10.244.0.27:47010 - 60828 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005607554s
	[INFO] 10.244.0.27:46662 - 45152 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007088447s
	[INFO] 10.244.0.27:60306 - 17345 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000925116s
	[INFO] 10.244.0.27:50259 - 39178 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001983867s
	[INFO] 10.244.0.32:60236 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000345556s
	[INFO] 10.244.0.32:41492 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202476s
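
The NXDOMAIN bursts above are normal resolv.conf search-path expansion, not resolution failures: with the default pod ndots:5, a name like storage.googleapis.com is tried against every search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-provided suffixes) before the bare query finally returns NOERROR. Pods that mostly resolve external FQDNs can opt out per pod; a minimal sketch (pod name and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-tuned                    # illustrative name
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                     # try the bare name first, skipping search expansion
      containers:
      - name: app
        image: busybox:stable
        command: ["sleep", "3600"]
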
	
	
	==> describe nodes <==
	Name:               addons-619347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-619347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-619347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-619347
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-619347
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:43:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-619347
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 0728f6ac4f7f4421b7f9eeb1f21a8502
	  System UUID:                bfe74e22-ee1d-47b3-9c54-c1f6ef287d9d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  gadget                      gadget-9rfhl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-ghq9n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-qctdw                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-addons-619347                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-619347                250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-619347       200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-sdscg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-619347                100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node addons-619347 event: Registered Node addons-619347 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 b0 60 62 2a 0f 08 06
	[  +2.140079] IPv4: martian source 10.244.0.8 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.023792] IPv4: martian source 10.244.0.8 from 10.244.0.7, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[  +1.345643] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[Sep26 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 1b 32 9d 1a 30 08 06
	[  +0.000481] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.000612] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	
	
	==> etcd [5ae0da6e5bfb] <==
	{"level":"warn","ts":"2025-09-26T22:29:51.882030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:56.750352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.699546ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:29:56.750520Z","caller":"traceutil/trace.go:172","msg":"trace[545120805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1029; }","duration":"124.835239ms","start":"2025-09-26T22:29:56.625613Z","end":"2025-09-26T22:29:56.750448Z","steps":["trace[545120805] 'range keys from in-memory index tree'  (duration: 124.657379ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:29:56.750545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.950667ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040237519390281 > lease_revoke:<id:70cc9988256cc037>","response":"size:29"}
	{"level":"info","ts":"2025-09-26T22:29:56.750622Z","caller":"traceutil/trace.go:172","msg":"trace[550957012] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"110.947341ms","start":"2025-09-26T22:29:56.639663Z","end":"2025-09-26T22:29:56.750610Z","steps":["trace[550957012] 'read index received'  (duration: 40.919µs)","trace[550957012] 'applied index is now lower than readState.Index'  (duration: 110.905488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:29:56.750818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.149289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/gadget/gadget-role\" limit:1 ","response":"range_response_count:1 size:929"}
	{"level":"info","ts":"2025-09-26T22:29:56.750855Z","caller":"traceutil/trace.go:172","msg":"trace[1241908482] range","detail":"{range_begin:/registry/roles/gadget/gadget-role; range_end:; response_count:1; response_revision:1029; }","duration":"111.19346ms","start":"2025-09-26T22:29:56.639653Z","end":"2025-09-26T22:29:56.750846Z","steps":["trace[1241908482] 'agreement among raft nodes before linearized reading'  (duration: 111.040998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:11.365351Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.170976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-26T22:30:11.365445Z","caller":"traceutil/trace.go:172","msg":"trace[2098668277] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:1075; }","duration":"101.279805ms","start":"2025-09-26T22:30:11.264150Z","end":"2025-09-26T22:30:11.365430Z","steps":["trace[2098668277] 'range keys from in-memory index tree'  (duration: 101.005862ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:16.821834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.870731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.885825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.893998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.925127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.935713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.946969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.959548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.971710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.978983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.988574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.999879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:30:33.500976Z","caller":"traceutil/trace.go:172","msg":"trace[381421690] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"106.589616ms","start":"2025-09-26T22:30:33.394366Z","end":"2025-09-26T22:30:33.500955Z","steps":["trace[381421690] 'process raft request'  (duration: 106.446725ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:39:38.933371Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1764}
	{"level":"info","ts":"2025-09-26T22:39:38.967116Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1764,"took":"33.118081ms","hash":3118520929,"current-db-size-bytes":9007104,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":6070272,"current-db-size-in-use":"6.1 MB"}
	{"level":"info","ts":"2025-09-26T22:39:38.967158Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3118520929,"revision":1764,"compact-revision":-1}
	
	
	==> kernel <==
	 22:43:33 up  4:25,  0 users,  load average: 1.72, 1.94, 1.76
	Linux addons-619347 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9d1b130b03b0] <==
	I0926 22:37:00.802682       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:18.863268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:08.393742       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:27.210526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:17.734882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:39.792776       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:39:49.815861       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:38.099623       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:10.371771       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:41.275865       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:47.159903       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:41:47.159957       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:41:47.173535       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:41:47.173584       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:41:47.175082       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:41:47.175126       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:41:47.187944       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:41:47.188000       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:41:47.198718       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:41:47.198756       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0926 22:41:48.175095       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0926 22:41:48.199820       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0926 22:41:48.208594       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0926 22:42:32.053117       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:06.424246       1 stats.go:136] "Error getting keys" err="empty key: \"\""
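
The burst at 22:41:47-48, where the snapshot.storage.k8s.io v1 and v1beta1 groups are registered repeatedly and "Terminating all watchers" then fires for the three volumesnapshot cachers, is the signature of the VolumeSnapshot CRDs being deleted; it lines up with the container teardown in the Docker log at the same timestamps. A sketch of how that transition is normally driven and then verified (binary and context names taken from this run):

    out/minikube-linux-amd64 -p addons-619347 addons disable volumesnapshots
    kubectl --context addons-619347 get crd | grep snapshot.storage.k8s.io
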
	
	
	==> kube-controller-manager [f5b2050f68de] <==
	E0926 22:42:53.701306       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:42:53.702331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:42:57.744450       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:42:57.745528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:42:59.590930       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:42:59.591975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:01.298450       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:01.299631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:01.786968       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0926 22:43:07.894856       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:07.895852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:08.643821       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:08.644809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:16.421359       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:16.422360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:16.787429       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0926 22:43:17.217992       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:17.218939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:17.240117       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:17.241142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:21.581276       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:21.582225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:24.369067       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:43:24.370173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:43:31.787983       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
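
The persistentvolume-binder error repeating every 15s says claim default/test-pvc names a StorageClass "local-path" that does not currently exist, so the claim sits Pending until a class with that exact name appears. A sketch of the pairing involved (the provisioner string and size are the usual local-path-provisioner values, assumed rather than read from this run):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path                   # must match the claim below
    provisioner: rancher.io/local-path   # assumed: local-path-provisioner's provisioner name
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc                     # the claim named in the log
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 1Gi                   # illustrative size
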
	
	
	==> kube-proxy [227d066a100c] <==
	I0926 22:29:48.632051       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:29:48.823798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:48.926913       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:48.926974       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:29:48.927216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:48.966553       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:29:48.966624       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:48.976081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:48.977337       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:48.977360       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:48.983888       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:48.983916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:48.984026       1 config.go:200] "Starting service config controller"
	I0926 22:29:48.984052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:48.984116       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:48.984123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:48.987589       1 config.go:309] "Starting node config controller"
	I0926 22:29:48.987610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:48.987619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:49.084696       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:29:49.084764       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:49.085094       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8209664c099e] <==
	E0926 22:29:39.815534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:39.815639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:39.815741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:39.815842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:39.815881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:39.815924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:39.815978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:39.816083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:39.816079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:39.816116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:39.816205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:29:39.816287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:39.816442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:39.816526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.634465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:40.655555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.682056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:40.739683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:40.750044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:40.783186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:40.869968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:40.950295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:41.003301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:41.010326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0926 22:29:41.412472       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:41:48 addons-619347 kubelet[2321]: I0926 22:41:48.633656    2321 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2272adc16d5b82c96c472baf4f596717fcf44ff16d46d05b740fba8c19417b8c"} err="failed to get container status \"2272adc16d5b82c96c472baf4f596717fcf44ff16d46d05b740fba8c19417b8c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2272adc16d5b82c96c472baf4f596717fcf44ff16d46d05b740fba8c19417b8c"
	Sep 26 22:41:48 addons-619347 kubelet[2321]: I0926 22:41:48.758159    2321 csi_plugin.go:271] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin hostpath.csi.k8s.io, endpoint /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Sep 26 22:41:50 addons-619347 kubelet[2321]: E0926 22:41:50.310910    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:41:50 addons-619347 kubelet[2321]: I0926 22:41:50.318728    2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d50f2b9-3baf-40f4-8cca-e0515fb6c3aa" path="/var/lib/kubelet/pods/1d50f2b9-3baf-40f4-8cca-e0515fb6c3aa/volumes"
	Sep 26 22:41:50 addons-619347 kubelet[2321]: I0926 22:41:50.319195    2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1ea325-7572-4e6e-bace-6a0751278f1d" path="/var/lib/kubelet/pods/7d1ea325-7572-4e6e-bace-6a0751278f1d/volumes"
	Sep 26 22:41:50 addons-619347 kubelet[2321]: I0926 22:41:50.319661    2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5674c4-190f-4e47-b63d-ac3f9558f139" path="/var/lib/kubelet/pods/cf5674c4-190f-4e47-b63d-ac3f9558f139/volumes"
	Sep 26 22:41:59 addons-619347 kubelet[2321]: E0926 22:41:59.311801    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:42:03 addons-619347 kubelet[2321]: W0926 22:42:03.055913    2321 logging.go:55] [core] [Channel #68 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 26 22:42:04 addons-619347 kubelet[2321]: E0926 22:42:04.310645    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:42:10 addons-619347 kubelet[2321]: E0926 22:42:10.312068    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:42:16 addons-619347 kubelet[2321]: E0926 22:42:16.310664    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:42:22 addons-619347 kubelet[2321]: E0926 22:42:22.312979    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:42:27 addons-619347 kubelet[2321]: I0926 22:42:27.310239    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:42:27 addons-619347 kubelet[2321]: E0926 22:42:27.310336    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:42:37 addons-619347 kubelet[2321]: E0926 22:42:37.311968    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:42:41 addons-619347 kubelet[2321]: E0926 22:42:41.310394    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:42:50 addons-619347 kubelet[2321]: E0926 22:42:50.312281    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:42:52 addons-619347 kubelet[2321]: E0926 22:42:52.311817    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:43:03 addons-619347 kubelet[2321]: E0926 22:43:03.310106    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:43:04 addons-619347 kubelet[2321]: E0926 22:43:04.312182    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:43:17 addons-619347 kubelet[2321]: E0926 22:43:17.309904    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:43:19 addons-619347 kubelet[2321]: E0926 22:43:19.312280    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:43:24 addons-619347 kubelet[2321]: W0926 22:43:24.763452    2321 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 26 22:43:28 addons-619347 kubelet[2321]: E0926 22:43:28.311163    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:43:30 addons-619347 kubelet[2321]: E0926 22:43:30.312012    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	
	
	==> storage-provisioner [d9822a41079f] <==
	W0926 22:43:07.646138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:09.649139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:09.654076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:11.657714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:11.661591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:13.666301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:13.670072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:15.672982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:15.676658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:17.679522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:17.684365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:19.687432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:19.691140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:21.693867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:21.697116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:23.700839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:23.704441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:25.707773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:25.712896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:27.716277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:27.720275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:29.722866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:29.726644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:31.730372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:31.734168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
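Note on the repeated pv_controller errors in the dump above: the claim default/test-pvc requests the "local-path" StorageClass, which does not exist in this cluster, so the persistentvolume-binder can never provision a volume and the test-local-path pod stays Pending. A minimal sketch of how one might confirm and work around this from the host; the provisioner name rancher.io/local-path is an assumption taken from the upstream local-path-provisioner project, not from this run:

    # Confirm the class is missing and the claim is stuck Pending.
    kubectl --context addons-619347 get storageclass
    kubectl --context addons-619347 get pvc test-pvc -n default
    # Supply the missing class by hand (assumes a local-path provisioner is running).
    cat <<'EOF' | kubectl --context addons-619347 apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
    provisioner: rancher.io/local-path   # assumption: upstream default name
    volumeBindingMode: WaitForFirstConsumer
    EOF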
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
helpers_test.go:269: (dbg) Run:  kubectl --context addons-619347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz: exit status 1 (78.710467ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-619347/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:35:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq742 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jq742:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/nginx to addons-619347
	  Normal   Pulling    5m19s (x5 over 8m1s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m19s (x5 over 8m1s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m19s (x5 over 8m1s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m59s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m59s (x21 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-619347/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:35:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xgkr7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xgkr7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  7m49s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-619347
	  Warning  Failed     7m48s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m51s (x5 over 7m49s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m51s (x5 over 7m48s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x4 over 7m33s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m46s (x21 over 7m48s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m46s (x21 over 7m48s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfq8j (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sfq8j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dbtd8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-65dgz" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-619347 addons disable ingress --alsologtostderr -v=1: (7.606135245s)
--- FAIL: TestAddons/parallel/Ingress (491.16s)
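The root cause of this failure and of the CSI one below is registry throttling rather than cluster state: every image pull fails with Docker Hub's "toomanyrequests ... unauthenticated pull rate limit" error. A sketch of Docker's documented anonymous-quota check, assuming curl and jq are available on the CI host (per Docker's docs, the HEAD request itself is not counted against the limit):

    # Fetch an anonymous token for the dedicated rate-limit test repository,
    # then read the ratelimit-* headers from a manifest HEAD request.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'
    # Typical output when the quota is exhausted:
    #   ratelimit-limit: 100;w=21600
    #   ratelimit-remaining: 0;w=21600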

TestAddons/parallel/CSI (385.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0926 22:35:28.167395 1399974 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0926 22:35:28.171339 1399974 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0926 22:35:28.171369 1399974 kapi.go:107] duration metric: took 3.99401ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.007289ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-619347 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc hpvc -o jsonpath={.status.phase} -n default
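The seventeen identical helpers_test.go:402 invocations above are the harness polling the claim's phase until it reports Bound or the 6m0s deadline expires. An equivalent manual poll, as a sketch only (the real helper applies its own backoff and timeout handling):

    # Poll the PVC phase the same way the test does.
    while [ "$(kubectl --context addons-619347 get pvc hpvc -n default \
                -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2
    done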
addons_test.go:562: (dbg) Run:  kubectl --context addons-619347 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e055794c-8563-455d-956f-81e9b7627d09] Pending
helpers_test.go:352: "task-pv-pod" [e055794c-8563-455d-956f-81e9b7627d09] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-26 22:41:44.766164619 +0000 UTC m=+774.722147556
addons_test.go:567: (dbg) Run:  kubectl --context addons-619347 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-619347 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-619347/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:35:44 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.34
IPs:
IP:  10.244.0.34
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xgkr7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-xgkr7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-619347
Warning  Failed     5m59s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m2s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m2s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Warning  Failed     3m2s (x4 over 5m44s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    57s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     57s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:567: (dbg) Run:  kubectl --context addons-619347 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-619347 logs task-pv-pod -n default: exit status 1 (72.244345ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

** /stderr **
addons_test.go:567: kubectl --context addons-619347 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
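Since task-pv-pod fails only at image pull, one pragmatic workaround is to fetch the image once on the host, where authenticated credentials carry a higher quota, and side-load it into the node so the kubelet never contacts Docker Hub. A sketch, assuming a valid Docker Hub login on the CI host:

    docker login                                     # authenticated pulls get a larger quota
    docker pull docker.io/nginx
    minikube -p addons-619347 image load docker.io/nginx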
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-619347
helpers_test.go:243: (dbg) docker inspect addons-619347:

-- stdout --
	[
	    {
	        "Id": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	        "Created": "2025-09-26T22:29:24.504112175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1401920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:29:24.53667075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hosts",
	        "LogPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d-json.log",
	        "Name": "/addons-619347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-619347:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-619347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	                "LowerDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-619347",
	                "Source": "/var/lib/docker/volumes/addons-619347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-619347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-619347",
	                "name.minikube.sigs.k8s.io": "addons-619347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3015286d67af8b7391959f3121ca363feb45d14fa55ccdc7193de806e7fe6e96",
	            "SandboxKey": "/var/run/docker/netns/3015286d67af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-619347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:cd:cb:d7:a7:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22f06ef7f1b3d4919d623039fdb7eaef892f9c8c0a7074ff47e8c48934f6f117",
	                    "EndpointID": "4b693477b2120ec160d127bc2bc90fabb016ebf45c34df1cad9bd2399ffdc1cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-619347",
	                        "f0caa77a5878"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
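The NetworkSettings.Ports block in the inspect output above shows each container port published on a loopback ephemeral port; 8443/tcp, the Kubernetes API server, maps to 127.0.0.1:33884. The same mappings can be read without parsing the full inspect JSON:

    docker port addons-619347 8443/tcp    # prints 127.0.0.1:33884
    docker port addons-619347             # lists every published port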
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-619347 -n addons-619347
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 logs -n 25
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-036757                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-036757   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ start   │ --download-only -p download-docker-193843 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ -p download-docker-193843                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ --download-only -p binary-mirror-237584 --alsologtostderr --binary-mirror http://127.0.0.1:35911 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-237584                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ disable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-619347 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-619347 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ addons  │ addons-619347 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ enable headlamp -p addons-619347 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ ip      │ addons-619347 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                             │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:40 UTC │ 26 Sep 25 22:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
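	The Audit table above lists every minikube invocation for this run, one box-drawn row per command. If the entries are needed programmatically, splitting each row on the │ separator is usually sufficient; a minimal Go sketch (parseAuditRow is our name, not a minikube helper):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseAuditRow splits one box-drawn audit row into trimmed cells.
	func parseAuditRow(row string) []string {
		var cells []string
		for _, c := range strings.Split(row, "│") {
			if t := strings.TrimSpace(c); t != "" {
				cells = append(cells, t)
			}
		}
		return cells
	}

	func main() {
		row := "│ ip │ addons-619347 ip │ addons-619347 │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │"
		fmt.Println(parseAuditRow(row)) // [ip addons-619347 ip addons-619347 jenkins v1.37.0 ...]
	}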
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:01
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:01.756585 1401287 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:01.756707 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756717 1401287 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:01.756724 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756944 1401287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:29:01.757503 1401287 out.go:368] Setting JSON to false
	I0926 22:29:01.758423 1401287 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15086,"bootTime":1758910656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:01.758529 1401287 start.go:140] virtualization: kvm guest
	I0926 22:29:01.760350 1401287 out.go:179] * [addons-619347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:01.761510 1401287 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:01.761513 1401287 notify.go:220] Checking for updates...
	I0926 22:29:01.763728 1401287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:01.765716 1401287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:01.766946 1401287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:29:01.767993 1401287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:01.768984 1401287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:01.770171 1401287 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:01.792688 1401287 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:01.792779 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.845164 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.835526355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.845273 1401287 docker.go:318] overlay module found
	I0926 22:29:01.847734 1401287 out.go:179] * Using the docker driver based on user configuration
	I0926 22:29:01.848892 1401287 start.go:304] selected driver: docker
	I0926 22:29:01.848910 1401287 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:01.848922 1401287 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:01.849577 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.899952 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.890671576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
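	Both capability probes above shell out to docker system info --format "{{json .}}" and decode the JSON. A minimal Go sketch of the same probe, assuming only a small subset of the fields minikube's info.go actually models:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo is a small subset of what `docker system info` reports;
	// the real struct in minikube is much larger.
	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		OperatingSystem string `json:"OperatingSystem"`
		CgroupDriver    string `json:"CgroupDriver"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", info)
	}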
	I0926 22:29:01.900135 1401287 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:01.900371 1401287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:01.902046 1401287 out.go:179] * Using Docker driver with root privileges
	I0926 22:29:01.903097 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:01.903175 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:01.903186 1401287 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:01.903270 1401287 start.go:348] cluster config:
	{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:01.904858 1401287 out.go:179] * Starting "addons-619347" primary control-plane node in "addons-619347" cluster
	I0926 22:29:01.906044 1401287 cache.go:123] Beginning downloading kic base image for docker with docker
	I0926 22:29:01.907356 1401287 out.go:179] * Pulling base image v0.0.48 ...
	I0926 22:29:01.908297 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:01.908335 1401287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0926 22:29:01.908345 1401287 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:01.908416 1401287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:29:01.908443 1401287 preload.go:172] Found /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:01.908453 1401287 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0926 22:29:01.908843 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:01.908883 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json: {Name:mkc2865f84bd589b8eae2eb83eded5267684d61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
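	The profile config is persisted as JSON under .minikube/profiles/<name>/config.json, guarded by a file lock (the mk… lock name, 500ms delay, and 1m0s timeout in the line above). A simplified Go sketch that swaps the lock for a temp-file-plus-rename write; profileConfig is a hypothetical subset of the real cluster config:

	package main

	import (
		"encoding/json"
		"os"
		"path/filepath"
	)

	// profileConfig is a stand-in for a few fields of minikube's cluster config.
	type profileConfig struct {
		Name   string `json:"Name"`
		Driver string `json:"Driver"`
		Memory int    `json:"Memory"`
		CPUs   int    `json:"CPUs"`
	}

	// saveConfig writes the profile JSON via a temp file + rename, so a crash
	// mid-write never leaves a truncated config.json behind.
	func saveConfig(dir string, cfg profileConfig) error {
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			return err
		}
		tmp := filepath.Join(dir, ".config.json.tmp")
		if err := os.WriteFile(tmp, data, 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, filepath.Join(dir, "config.json"))
	}

	func main() {
		_ = saveConfig(os.TempDir(), profileConfig{Name: "addons-619347", Driver: "docker", Memory: 4096, CPUs: 2})
	}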
	I0926 22:29:01.925224 1401287 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:29:01.925402 1401287 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:29:01.925420 1401287 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0926 22:29:01.925428 1401287 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0926 22:29:01.925435 1401287 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0926 22:29:01.925439 1401287 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0926 22:29:14.155592 1401287 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0926 22:29:14.155633 1401287 cache.go:232] Successfully downloaded all kic artifacts
	I0926 22:29:14.155712 1401287 start.go:360] acquireMachinesLock for addons-619347: {Name:mk16a13d35eefb90d37e67ab9d542372a6292c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:14.155829 1401287 start.go:364] duration metric: took 91.725µs to acquireMachinesLock for "addons-619347"
	I0926 22:29:14.155856 1401287 start.go:93] Provisioning new machine with config: &{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:14.155980 1401287 start.go:125] createHost starting for "" (driver="docker")
	I0926 22:29:14.157562 1401287 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0926 22:29:14.157823 1401287 start.go:159] libmachine.API.Create for "addons-619347" (driver="docker")
	I0926 22:29:14.157858 1401287 client.go:168] LocalClient.Create starting
	I0926 22:29:14.158021 1401287 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem
	I0926 22:29:14.205932 1401287 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem
	I0926 22:29:14.366294 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 22:29:14.383620 1401287 cli_runner.go:211] docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 22:29:14.383691 1401287 network_create.go:284] running [docker network inspect addons-619347] to gather additional debugging logs...
	I0926 22:29:14.383716 1401287 cli_runner.go:164] Run: docker network inspect addons-619347
	W0926 22:29:14.399817 1401287 cli_runner.go:211] docker network inspect addons-619347 returned with exit code 1
	I0926 22:29:14.399876 1401287 network_create.go:287] error running [docker network inspect addons-619347]: docker network inspect addons-619347: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-619347 not found
	I0926 22:29:14.399898 1401287 network_create.go:289] output of [docker network inspect addons-619347]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-619347 not found
	
	** /stderr **
	I0926 22:29:14.400043 1401287 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:14.417291 1401287 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ae9be0}
	I0926 22:29:14.417339 1401287 network_create.go:124] attempt to create docker network addons-619347 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0926 22:29:14.417382 1401287 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-619347 addons-619347
	I0926 22:29:14.473127 1401287 network_create.go:108] docker network addons-619347 192.168.49.0/24 created
	I0926 22:29:14.473163 1401287 kic.go:121] calculated static IP "192.168.49.2" for the "addons-619347" container
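	The static IP follows directly from the subnet chosen above: the gateway takes .1 and the first client .2 (the ClientMin field in the network line). A small sketch of that arithmetic with net/netip; clientIP is our helper name, not minikube's:

	package main

	import (
		"fmt"
		"net/netip"
	)

	// clientIP returns the nth address after the network base, mirroring how
	// the gateway gets .1 and the first container gets .2 in a fresh /24.
	func clientIP(cidr string, n int) (netip.Addr, error) {
		p, err := netip.ParsePrefix(cidr)
		if err != nil {
			return netip.Addr{}, err
		}
		a := p.Masked().Addr() // network base, e.g. 192.168.49.0
		for i := 0; i < n; i++ {
			a = a.Next()
		}
		return a, nil
	}

	func main() {
		ip, _ := clientIP("192.168.49.0/24", 2)
		fmt.Println(ip) // 192.168.49.2
	}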
	I0926 22:29:14.473252 1401287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 22:29:14.489293 1401287 cli_runner.go:164] Run: docker volume create addons-619347 --label name.minikube.sigs.k8s.io=addons-619347 --label created_by.minikube.sigs.k8s.io=true
	I0926 22:29:14.506092 1401287 oci.go:103] Successfully created a docker volume addons-619347
	I0926 22:29:14.506161 1401287 cli_runner.go:164] Run: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 22:29:20.841341 1401287 cli_runner.go:217] Completed: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.335139978s)
	I0926 22:29:20.841369 1401287 oci.go:107] Successfully prepared a docker volume addons-619347
	I0926 22:29:20.841406 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:20.841430 1401287 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 22:29:20.841514 1401287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 22:29:24.436467 1401287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.594814262s)
	I0926 22:29:24.436527 1401287 kic.go:203] duration metric: took 3.595091279s to extract preloaded images to volume ...
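	The preload step above extracts the image tarball straight into the named volume by running tar inside a throwaway container. A Go sketch that rebuilds the logged command with os/exec; the tarball path passed in main is a placeholder, not the real cache path:

	package main

	import "os/exec"

	// extractPreload untars a preloaded image tarball into a named volume by
	// running tar in a short-lived container, mirroring the command in the log.
	func extractPreload(volume, tarball, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		return cmd.Run()
	}

	func main() {
		_ = extractPreload("addons-619347", "/path/to/preloaded-images.tar.lz4",
			"gcr.io/k8s-minikube/kicbase:v0.0.48")
	}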
	W0926 22:29:24.436629 1401287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 22:29:24.436675 1401287 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 22:29:24.436720 1401287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 22:29:24.488860 1401287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-619347 --name addons-619347 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-619347 --network addons-619347 --ip 192.168.49.2 --volume addons-619347:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 22:29:24.739034 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Running}}
	I0926 22:29:24.756901 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:24.774535 1401287 cli_runner.go:164] Run: docker exec addons-619347 stat /var/lib/dpkg/alternatives/iptables
	I0926 22:29:24.821732 1401287 oci.go:144] the created container "addons-619347" has a running status.
	I0926 22:29:24.821762 1401287 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa...
	I0926 22:29:25.058873 1401287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 22:29:25.084720 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.103222 1401287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 22:29:25.103256 1401287 kic_runner.go:114] Args: [docker exec --privileged addons-619347 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 22:29:25.152057 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.171032 1401287 machine.go:93] provisionDockerMachine start ...
	I0926 22:29:25.171165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.192356 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.192770 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.192789 1401287 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:29:25.329327 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
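	Provisioning talks to the node over SSH on 127.0.0.1:33881, the host port Docker bound to the container's port 22, resolved with the inspect template shown above. A sketch of that port lookup; sshHostPort is our name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks Docker which host port was published for the container's
	// port 22; minikube then dials 127.0.0.1:<port> for provisioning.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-619347")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh port:", port) // 33881 in the run above
	}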
	
	I0926 22:29:25.329360 1401287 ubuntu.go:182] provisioning hostname "addons-619347"
	I0926 22:29:25.329440 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.347623 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.347852 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.347866 1401287 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-619347 && echo "addons-619347" | sudo tee /etc/hostname
	I0926 22:29:25.495671 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.495764 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.513361 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.513676 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.513706 1401287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-619347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-619347/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-619347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.648127 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.648158 1401287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-1396392/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-1396392/.minikube}
	I0926 22:29:25.648181 1401287 ubuntu.go:190] setting up certificates
	I0926 22:29:25.648194 1401287 provision.go:84] configureAuth start
	I0926 22:29:25.648256 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:25.665581 1401287 provision.go:143] copyHostCerts
	I0926 22:29:25.665655 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.665964 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.666216 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.666332 1401287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem org=jenkins.addons-619347 san=[127.0.0.1 192.168.49.2 addons-619347 localhost minikube]
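	The server certificate is generated with the SAN list shown above: the loopback address, the static node IP, and the addons-619347/localhost/minikube hostnames. A compact Go sketch that produces a comparable certificate, except self-signed here rather than signed by minikube's CA key as in the real flow:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-619347"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-619347", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Template doubles as parent, so the result is self-signed.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}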
	I0926 22:29:26.345521 1401287 provision.go:177] copyRemoteCerts
	I0926 22:29:26.345589 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:26.345626 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.363376 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:26.461182 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:26.487057 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:26.511222 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 22:29:26.535844 1401287 provision.go:87] duration metric: took 887.635192ms to configureAuth
	I0926 22:29:26.535878 1401287 ubuntu.go:206] setting minikube options for container-runtime
	I0926 22:29:26.536095 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:26.536165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.554135 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.554419 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.554438 1401287 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 22:29:26.690395 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0926 22:29:26.690420 1401287 ubuntu.go:71] root file system type: overlay
	I0926 22:29:26.690565 1401287 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 22:29:26.690630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.708389 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.708653 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.708753 1401287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 22:29:26.857459 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 22:29:26.857566 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.875261 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.875543 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.875567 1401287 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 22:29:27.972927 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-26 22:29:26.855075288 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0926 22:29:27.972953 1401287 machine.go:96] duration metric: took 2.801887579s to provisionDockerMachine
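	The diff-or-swap command issued at 22:29:26 makes the unit update idempotent: docker.service.new only replaces the live unit, and docker is only reloaded and restarted, when the content actually differs. The same pattern in Go, run against a local path for illustration (writing under /lib/systemd requires root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit replaces a systemd unit only when its content changed,
	// then reloads, enables, and restarts the service.
	func updateUnit(path string, want []byte) error {
		have, _ := os.ReadFile(path)
		if bytes.Equal(have, want) {
			return nil // unchanged: no daemon-reload, no restart
		}
		if err := os.WriteFile(path, want, 0o644); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		_ = updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
	}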
	I0926 22:29:27.972966 1401287 client.go:171] duration metric: took 13.815098068s to LocalClient.Create
	I0926 22:29:27.972989 1401287 start.go:167] duration metric: took 13.815166582s to libmachine.API.Create "addons-619347"
	I0926 22:29:27.972999 1401287 start.go:293] postStartSetup for "addons-619347" (driver="docker")
	I0926 22:29:27.973014 1401287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:27.973075 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:27.973123 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:27.990436 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.088898 1401287 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:28.092357 1401287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 22:29:28.092381 1401287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 22:29:28.092389 1401287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 22:29:28.092397 1401287 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 22:29:28.092411 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/addons for local assets ...
	I0926 22:29:28.092496 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/files for local assets ...
	I0926 22:29:28.092533 1401287 start.go:296] duration metric: took 119.526658ms for postStartSetup
	I0926 22:29:28.092888 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.110347 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:28.110666 1401287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:29:28.110720 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.127963 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.219507 1401287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 22:29:28.223820 1401287 start.go:128] duration metric: took 14.067824148s to createHost
	I0926 22:29:28.223850 1401287 start.go:83] releasing machines lock for "addons-619347", held for 14.068007272s
	I0926 22:29:28.223922 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.240598 1401287 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:28.240633 1401287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:28.240652 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.240703 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.257372 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.258797 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.423810 1401287 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:28.428533 1401287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 22:29:28.433038 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 22:29:28.461936 1401287 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 22:29:28.462028 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:28.488392 1401287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
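After this patch, the loopback config minikube expects is, in sketch form (only the name and cniVersion fields touched by the sed above are certain; the rest is the stock loopback plugin shape):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}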
	I0926 22:29:28.488420 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.488455 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.488593 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.505081 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 22:29:28.516249 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 22:29:28.526291 1401287 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 22:29:28.526353 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 22:29:28.536220 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.546282 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 22:29:28.556108 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.565920 1401287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:28.575000 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 22:29:28.584684 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 22:29:28.594441 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 22:29:28.604436 1401287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:28.612926 1401287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:29:28.621307 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:28.686706 1401287 ssh_runner.go:195] Run: sudo systemctl restart containerd
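Taken together, the sed edits above leave /etc/containerd/config.toml running runc v2 under the systemd cgroup driver; the relevant stanza, abridged and reconstructed from the substitutions (exact surrounding keys may differ), looks like:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true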
	I0926 22:29:28.765771 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.765825 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.765881 1401287 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 22:29:28.778235 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.789193 1401287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:28.806369 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.817718 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 22:29:28.828841 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.845391 1401287 ssh_runner.go:195] Run: which cri-dockerd
	I0926 22:29:28.848841 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 22:29:28.859051 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
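With /etc/crictl.yaml now pointing at the cri-dockerd socket and the CNI drop-in staged, crictl should reach Docker over CRI; a sanity check on the node (output matching the crictl version probe later in this log) would be:

	$ sudo crictl version
	Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1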
	I0926 22:29:28.876661 1401287 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 22:29:28.939711 1401287 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 22:29:29.006868 1401287 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0926 22:29:29.007006 1401287 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
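The 129-byte daemon.json pushed here is what flips Docker to the systemd cgroup driver; a representative shape (a sketch — only the cgroup-driver setting is certain from this log, the other keys are typical minikube defaults) is:

	{
	  "exec-opts": ["native.cgroupdriver=systemd"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}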
	I0926 22:29:29.025882 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0926 22:29:29.037344 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:29.102031 1401287 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 22:29:29.866941 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:29.878676 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 22:29:29.890349 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:29.901859 1401287 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 22:29:29.971712 1401287 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 22:29:30.041653 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.108440 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 22:29:30.127589 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0926 22:29:30.138450 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.204543 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 22:29:30.280240 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:30.292074 1401287 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 22:29:30.292147 1401287 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 22:29:30.295851 1401287 start.go:563] Will wait 60s for crictl version
	I0926 22:29:30.295920 1401287 ssh_runner.go:195] Run: which crictl
	I0926 22:29:30.299332 1401287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:30.334344 1401287 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0926 22:29:30.334407 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.359394 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.385840 1401287 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0926 22:29:30.385911 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:30.402657 1401287 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:30.406689 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:30.418124 1401287 kubeadm.go:883] updating cluster {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:30.418244 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:30.418289 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.437981 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.438007 1401287 docker.go:621] Images already preloaded, skipping extraction
	I0926 22:29:30.438061 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.457379 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.457402 1401287 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:30.457415 1401287 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0926 22:29:30.457550 1401287 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-619347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 22:29:30.457608 1401287 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
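Given the daemon.json written earlier, this probe is expected to confirm the driver before kubeadm is invoked:

	$ docker info --format '{{.CgroupDriver}}'
	systemd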
	I0926 22:29:30.507568 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:30.507618 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:30.507640 1401287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:30.507666 1401287 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-619347 NodeName:addons-619347 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:30.507817 1401287 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-619347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 22:29:30.507878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:30.517618 1401287 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:30.517680 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:30.526766 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:30.544641 1401287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:30.561976 1401287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
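The rendered config can be sanity-checked offline before init; assuming this kubeadm build ships the validator subcommand (available since v1.26), one way is:

	$ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# prints the offending document and exits non-zero if any of the
	# kubeadm/kubelet/kube-proxy documents above is malformed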
	I0926 22:29:30.579430 1401287 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:30.582806 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:30.593536 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.659215 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:30.680701 1401287 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347 for IP: 192.168.49.2
	I0926 22:29:30.680722 1401287 certs.go:195] generating shared ca certs ...
	I0926 22:29:30.680743 1401287 certs.go:227] acquiring lock for ca certs: {Name:mk6c7838cc2dce82903d545772166c35f6a8ea14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.680859 1401287 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key
	I0926 22:29:30.837572 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt ...
	I0926 22:29:30.837605 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt: {Name:mka8a7fba6c323e3efb5c337a110d874f4a069f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837797 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key ...
	I0926 22:29:30.837813 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key: {Name:mk5241bded4d58e8d730b5c39e3cb6b761b06b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837926 1401287 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key
	I0926 22:29:31.379026 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt ...
	I0926 22:29:31.379062 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt: {Name:mk0b26827e7effdc6e0cb418dab9aa237c23935e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379267 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key ...
	I0926 22:29:31.379283 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key: {Name:mkc17ee61ac662bf18733fd6087e23ac2b546ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379447 1401287 certs.go:257] generating profile certs ...
	I0926 22:29:31.379550 1401287 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key
	I0926 22:29:31.379571 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt with IP's: []
	I0926 22:29:31.863291 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt ...
	I0926 22:29:31.863331 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: {Name:mk25ddefd62aaf8d3e2f6d1fd2d519d1c2b1bea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863552 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key ...
	I0926 22:29:31.863571 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key: {Name:mk8cc05aa8f2753617dfe3d2ae365690c5c6ce86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863711 1401287 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15
	I0926 22:29:31.863742 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0926 22:29:32.476987 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 ...
	I0926 22:29:32.477026 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15: {Name:mkd972c04e4a2418d910fa6a476af654883d90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477231 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 ...
	I0926 22:29:32.477251 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15: {Name:mk6e7ebd8b361ff43396ae1d43e26cc4b3fca9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477363 1401287 certs.go:382] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt
	I0926 22:29:32.477503 1401287 certs.go:386] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key
	I0926 22:29:32.477596 1401287 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key
	I0926 22:29:32.477626 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt with IP's: []
	I0926 22:29:32.537971 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt ...
	I0926 22:29:32.538009 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt: {Name:mkfbd9d4d456b434b04760e6c3778ba177b5caa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538198 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key ...
	I0926 22:29:32.538217 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key: {Name:mkdbd77fea74f3adf740a694b7d5ff5142acf56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538432 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 22:29:32.538493 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:32.538542 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:32.538584 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:32.539249 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:32.564650 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 22:29:32.589199 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:32.612819 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:32.636809 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:32.660922 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:32.684674 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:32.708845 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:29:32.732866 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:32.759367 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:32.777459 1401287 ssh_runner.go:195] Run: openssl version
	I0926 22:29:32.783004 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:32.794673 1401287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798422 1401287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798497 1401287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.805099 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
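The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of the CA, computed by the x509 probe just above, which OpenSSL uses to look up trusted certs in /etc/ssl/certs:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941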
	I0926 22:29:32.814605 1401287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:32.817944 1401287 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:32.818016 1401287 kubeadm.go:400] StartCluster: {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:32.818116 1401287 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 22:29:32.836878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:32.846020 1401287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:32.855171 1401287 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 22:29:32.855233 1401287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:32.863903 1401287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:32.863919 1401287 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:32.863955 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:32.872442 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:32.872518 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:32.880882 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:32.889348 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:32.889394 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:32.897735 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.906508 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:32.906558 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.915447 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:32.924534 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:32.924590 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:32.933327 1401287 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 22:29:32.971243 1401287 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:32.971298 1401287 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:33.008888 1401287 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 22:29:33.009014 1401287 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 22:29:33.009067 1401287 kubeadm.go:318] OS: Linux
	I0926 22:29:33.009160 1401287 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 22:29:33.009217 1401287 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 22:29:33.009313 1401287 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 22:29:33.009388 1401287 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 22:29:33.009472 1401287 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 22:29:33.009577 1401287 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 22:29:33.009649 1401287 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 22:29:33.009739 1401287 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 22:29:33.064493 1401287 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:33.064612 1401287 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:33.064736 1401287 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:33.076202 1401287 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:33.078537 1401287 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:33.078633 1401287 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:33.078712 1401287 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:33.613982 1401287 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:34.132193 1401287 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:34.241294 1401287 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:34.638661 1401287 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:34.928444 1401287 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:34.928596 1401287 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.122701 1401287 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:35.122888 1401287 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.275604 1401287 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:35.549799 1401287 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:35.689108 1401287 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:35.689184 1401287 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:35.894121 1401287 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:36.122749 1401287 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:36.401681 1401287 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:36.449466 1401287 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:36.577737 1401287 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:36.578213 1401287 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:36.581892 1401287 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:36.583521 1401287 out.go:252]   - Booting up control plane ...
	I0926 22:29:36.583635 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:36.583735 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:36.584452 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:36.594025 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:36.594112 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:36.599591 1401287 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:36.599832 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:36.599913 1401287 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:36.682320 1401287 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:36.682523 1401287 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:37.683335 1401287 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001189529s
	I0926 22:29:37.687852 1401287 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:37.687994 1401287 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0926 22:29:37.688138 1401287 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:37.688267 1401287 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:38.693325 1401287 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005328653s
	I0926 22:29:39.818196 1401287 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.130304657s
	I0926 22:29:41.690178 1401287 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002189462s
	I0926 22:29:41.702527 1401287 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:41.711408 1401287 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:41.720193 1401287 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:41.720435 1401287 kubeadm.go:318] [mark-control-plane] Marking the node addons-619347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:41.727838 1401287 kubeadm.go:318] [bootstrap-token] Using token: ydwgpt.re3mhs2qr7yfu0od
	I0926 22:29:41.729412 1401287 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:41.729554 1401287 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:41.732328 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:41.737352 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:41.740726 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:41.743207 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:41.745363 1401287 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:42.096302 1401287 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:42.513166 1401287 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:43.094717 1401287 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:43.095522 1401287 kubeadm.go:318] 
	I0926 22:29:43.095627 1401287 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:43.095642 1401287 kubeadm.go:318] 
	I0926 22:29:43.095755 1401287 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:43.095774 1401287 kubeadm.go:318] 
	I0926 22:29:43.095814 1401287 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:43.095897 1401287 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:43.095977 1401287 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:43.095986 1401287 kubeadm.go:318] 
	I0926 22:29:43.096062 1401287 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:43.096071 1401287 kubeadm.go:318] 
	I0926 22:29:43.096135 1401287 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:43.096145 1401287 kubeadm.go:318] 
	I0926 22:29:43.096220 1401287 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:43.096324 1401287 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:43.096430 1401287 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:43.096455 1401287 kubeadm.go:318] 
	I0926 22:29:43.096638 1401287 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:43.096786 1401287 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:43.096798 1401287 kubeadm.go:318] 
	I0926 22:29:43.096919 1401287 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097088 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 \
	I0926 22:29:43.097115 1401287 kubeadm.go:318] 	--control-plane 
	I0926 22:29:43.097122 1401287 kubeadm.go:318] 
	I0926 22:29:43.097214 1401287 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:43.097228 1401287 kubeadm.go:318] 
	I0926 22:29:43.097348 1401287 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097470 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 
	I0926 22:29:43.099587 1401287 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 22:29:43.099739 1401287 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
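The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 of the cluster CA's public key; it can be recomputed from the cert on disk with the standard kubeadm recipe (path adjusted to this cluster's cert dir):

	$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413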
	I0926 22:29:43.099768 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:43.099788 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:43.101355 1401287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:43.102553 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:43.112120 1401287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
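The 496-byte /etc/cni/net.d/1-k8s.conflist written here chains the bridge and portmap plugins onto the 10.244.0.0/16 pod CIDR chosen above; in sketch form (the plugin layout is the standard CNI shape; individual field values beyond the subnet are a best guess):

	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}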
	I0926 22:29:43.130674 1401287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:43.130768 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.130767 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-619347 minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-619347 minikube.k8s.io/primary=true
	I0926 22:29:43.138720 1401287 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:43.217942 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.718375 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.218391 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.718337 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.219035 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.719000 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.218689 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.718531 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.218333 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.718316 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.783783 1401287 kubeadm.go:1113] duration metric: took 4.653074895s to wait for elevateKubeSystemPrivileges
	I0926 22:29:47.783815 1401287 kubeadm.go:402] duration metric: took 14.965805729s to StartCluster
	I0926 22:29:47.783835 1401287 settings.go:142] acquiring lock: {Name:mk19bb20e8e2719c9f4ae7071ba1f293bea0c47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.783943 1401287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:47.784300 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/kubeconfig: {Name:mk53eccd4814679d9dd1f60d4b668d1b7f9967e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.784499 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:47.784532 1401287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:47.784609 1401287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 22:29:47.784681 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784735 1401287 addons.go:69] Setting registry=true in profile "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784755 1401287 addons.go:69] Setting storage-provisioner=true in profile "addons-619347"
	I0926 22:29:47.784760 1401287 addons.go:238] Setting addon registry=true in "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting registry-creds=true in profile "addons-619347"
	I0926 22:29:47.784770 1401287 addons.go:238] Setting addon storage-provisioner=true in "addons-619347"
	I0926 22:29:47.784775 1401287 addons.go:238] Setting addon registry-creds=true in "addons-619347"
	I0926 22:29:47.784785 1401287 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-619347"
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784811 1401287 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784804 1401287 addons.go:69] Setting inspektor-gadget=true in profile "addons-619347"
	I0926 22:29:47.784822 1401287 addons.go:69] Setting volumesnapshots=true in profile "addons-619347"
	I0926 22:29:47.784827 1401287 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-619347"
	I0926 22:29:47.784832 1401287 addons.go:238] Setting addon inspektor-gadget=true in "addons-619347"
	I0926 22:29:47.784833 1401287 addons.go:238] Setting addon volumesnapshots=true in "addons-619347"
	I0926 22:29:47.784844 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784849 1401287 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:47.784851 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784856 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784879 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784951 1401287 addons.go:69] Setting ingress-dns=true in profile "addons-619347"
	I0926 22:29:47.784970 1401287 addons.go:69] Setting default-storageclass=true in profile "addons-619347"
	I0926 22:29:47.784958 1401287 addons.go:69] Setting gcp-auth=true in profile "addons-619347"
	I0926 22:29:47.784988 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-619347"
	I0926 22:29:47.784817 1401287 addons.go:69] Setting volcano=true in profile "addons-619347"
	I0926 22:29:47.785003 1401287 addons.go:238] Setting addon volcano=true in "addons-619347"
	I0926 22:29:47.785032 1401287 addons.go:69] Setting cloud-spanner=true in profile "addons-619347"
	I0926 22:29:47.785040 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785045 1401287 addons.go:238] Setting addon cloud-spanner=true in "addons-619347"
	I0926 22:29:47.785065 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785262 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784800 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785379 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784973 1401287 addons.go:238] Setting addon ingress-dns=true in "addons-619347"
	I0926 22:29:47.785498 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785518 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785535 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785723 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785798 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785980 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784992 1401287 mustload.go:65] Loading cluster: addons-619347
	I0926 22:29:47.784762 1401287 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-619347"
	I0926 22:29:47.787331 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784792 1401287 addons.go:69] Setting metrics-server=true in profile "addons-619347"
	I0926 22:29:47.784734 1401287 addons.go:69] Setting yakd=true in profile "addons-619347"
	I0926 22:29:47.787078 1401287 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787824 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.788010 1401287 addons.go:238] Setting addon metrics-server=true in "addons-619347"
	I0926 22:29:47.788028 1401287 addons.go:238] Setting addon yakd=true in "addons-619347"
	I0926 22:29:47.788047 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.788063 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.789412 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787118 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784734 1401287 addons.go:69] Setting ingress=true in profile "addons-619347"
	I0926 22:29:47.789936 1401287 addons.go:238] Setting addon ingress=true in "addons-619347"
	I0926 22:29:47.789980 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784814 1401287 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-619347"
	I0926 22:29:47.790231 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-619347"
	I0926 22:29:47.790451 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:47.793232 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.793847 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.802421 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.803014 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
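The burst of cli_runner lines above all probe the same thing: each addon goroutine checks that the addons-619347 container is still up before it proceeds. A minimal Go sketch of such a probe, assuming only that the docker CLI is on PATH; containerState is a hypothetical helper name, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to `docker container inspect` with the same
// --format template seen in the log and returns the trimmed status string.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("addons-619347")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state) // e.g. "running"
}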
	I0926 22:29:47.835418 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0926 22:29:47.836021 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:47.839393 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:47.839421 1401287 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:47.840142 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.845675 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0926 22:29:47.849257 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0926 22:29:47.856053 1401287 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:47.858820 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0926 22:29:47.856545 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:47.858894 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.860040 1401287 addons.go:238] Setting addon default-storageclass=true in "addons-619347"
	I0926 22:29:47.860081 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.860516 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:47.860534 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:47.860630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.866839 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.873854 1401287 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:47.875341 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:47.875365 1401287 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:47.875428 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.882655 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:47.882749 1401287 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:47.884700 1401287 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:47.885073 1401287 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:47.885418 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:47.885504 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.884703 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:47.887232 1401287 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:47.887315 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:47.887396 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.887247 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:47.889515 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:47.892008 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:47.893405 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:47.895131 1401287 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:47.896348 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:47.896370 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:47.896434 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.897311 1401287 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-619347"
	I0926 22:29:47.897358 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.898142 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.899126 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:47.900143 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:47.902104 1401287 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:47.902740 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:47.902755 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:47.902813 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.903595 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:47.903615 1401287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:47.903685 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.911178 1401287 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:47.912616 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:47.912637 1401287 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:47.912867 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.916927 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:47.918186 1401287 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:47.919909 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.920091 1401287 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:47.920106 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:47.920166 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.921441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.922745 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.923875 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:47.923890 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:47.923943 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.926937 1401287 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:47.927973 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:47.927993 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:47.928052 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.940536 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.942062 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0926 22:29:47.945122 1401287 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:47.946248 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:47.946273 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:47.946337 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.951570 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.958865 1401287 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:47.959859 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.960450 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.961755 1401287 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:47.965573 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:47.965594 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:47.965659 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.966411 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976561 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976622 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977107 1401287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:47.977106 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977119 1401287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:47.977177 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.980224 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.984609 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.989681 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.990796 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:47.997697 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.997795 1401287 retry.go:31] will retry after 178.321817ms: ssh: handshake failed: EOF
	W0926 22:29:47.999217 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.999256 1401287 retry.go:31] will retry after 245.552991ms: ssh: handshake failed: EOF
	I0926 22:29:48.009280 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.011073 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.018912 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:48.019331 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:29:48.022191 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.027290 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:48.029295 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:48.029402 1401287 retry.go:31] will retry after 284.652213ms: ssh: handshake failed: EOF
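The sshutil warnings above show the dial/retry pattern at work: a failed SSH handshake is logged, then retried after a short randomized delay. A minimal Go sketch of that retry-with-jittered-backoff shape; this mirrors the logged retry.go behaviour only loosely, and retry is a hypothetical helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn up to attempts times, sleeping a jittered delay between
// failures, as the "will retry after ..." lines in the log suggest.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // simulated dial failure
		}
		return nil
	})
	fmt.Println("final:", err) // final: <nil> once the third attempt succeeds
}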
	I0926 22:29:48.076445 1401287 node_ready.go:35] waiting up to 6m0s for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081001 1401287 node_ready.go:49] node "addons-619347" is "Ready"
	I0926 22:29:48.081030 1401287 node_ready.go:38] duration metric: took 4.536047ms for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081059 1401287 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:48.081111 1401287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:29:48.140834 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.140859 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:48.162194 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:48.165548 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:48.168900 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:48.182428 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.188630 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:48.188700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:48.201257 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:48.201282 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:48.206272 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:48.206297 1401287 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:48.207662 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:48.207682 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:48.218223 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:48.220995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:48.226298 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:48.226321 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:48.226742 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:48.226761 1401287 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:48.262874 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:48.262908 1401287 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:48.275319 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:48.275353 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:48.291538 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:48.291571 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:48.310099 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.310124 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:48.326030 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:48.326056 1401287 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:48.326064 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:48.326081 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:48.368923 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.368970 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:48.377708 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:48.377782 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:48.395824 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.409558 1401287 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
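The host-record injection confirmed here comes from the sed pipeline run at 22:29:48.019331: it edits the coredns ConfigMap in kube-system and replaces it in place. Reading that pipeline, the Corefile effectively gains this stanza ahead of the forward directive (plus a log directive before errors):

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

This is what lets pods resolve host.minikube.internal back to the host bridge address.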
	I0926 22:29:48.410568 1401287 api_server.go:72] duration metric: took 626.001878ms to wait for apiserver process to appear ...
	I0926 22:29:48.410598 1401287 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:29:48.410621 1401287 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0926 22:29:48.424990 1401287 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0926 22:29:48.427236 1401287 api_server.go:141] control plane version: v1.34.0
	I0926 22:29:48.427333 1401287 api_server.go:131] duration metric: took 16.7257ms to wait for apiserver health ...
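The healthz wait above is a plain HTTPS GET that succeeds once the body reads "ok". A minimal Go sketch of the same probe, using the endpoint from the log; skipping TLS verification here is a simplification for brevity, where a real client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip cert verification rather
			// than loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}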
	I0926 22:29:48.427359 1401287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:29:48.434147 1401287 system_pods.go:59] 7 kube-system pods found
	I0926 22:29:48.434185 1401287 system_pods.go:61] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434195 1401287 system_pods.go:61] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434206 1401287 system_pods.go:61] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.434221 1401287 system_pods.go:61] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.434230 1401287 system_pods.go:61] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.434237 1401287 system_pods.go:61] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.434245 1401287 system_pods.go:61] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.434254 1401287 system_pods.go:74] duration metric: took 6.877162ms to wait for pod list to return data ...
	I0926 22:29:48.434265 1401287 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:29:48.437910 1401287 default_sa.go:45] found service account: "default"
	I0926 22:29:48.437986 1401287 default_sa.go:55] duration metric: took 3.713655ms for default service account to be created ...
	I0926 22:29:48.438009 1401287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:29:48.449749 1401287 system_pods.go:86] 7 kube-system pods found
	I0926 22:29:48.449859 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449883 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449933 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.449956 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.449992 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.450028 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.450047 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.450113 1401287 retry.go:31] will retry after 220.911414ms: missing components: kube-dns, kube-proxy
	I0926 22:29:48.454420 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.454446 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:48.467995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.486003 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:48.486043 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:48.505966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:48.506005 1401287 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:48.519158 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:48.533016 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.564879 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:48.613388 1401287 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.613410 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:48.638555 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:48.678611 1401287 system_pods.go:86] 8 kube-system pods found
	I0926 22:29:48.678647 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending
	I0926 22:29:48.678660 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678669 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678691 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.678698 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:48.678709 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.678717 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.678724 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.678749 1401287 retry.go:31] will retry after 325.08055ms: missing components: kube-dns, kube-proxy
	I0926 22:29:48.694878 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:48.694910 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:48.717411 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.874966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:48.875006 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:48.915620 1401287 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-619347" context rescaled to 1 replicas
	I0926 22:29:48.947182 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:48.947278 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:49.013309 1401287 system_pods.go:86] 9 kube-system pods found
	I0926 22:29:49.013412 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.013424 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013461 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013471 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.013525 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.013537 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.013546 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.013553 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.013560 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.013636 1401287 retry.go:31] will retry after 486.746944ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.102910 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:49.102950 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:49.259460 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:49.259504 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:49.377226 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:49.377250 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:49.493928 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:49.493968 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:49.517924 1401287 system_pods.go:86] 14 kube-system pods found
	I0926 22:29:49.517990 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.518004 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518013 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518022 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.518044 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.518055 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.518063 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:49.518072 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.518081 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.518100 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.518123 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:49.518143 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:49.518154 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:49.518165 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending
	I0926 22:29:49.518211 1401287 retry.go:31] will retry after 599.651697ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.625802 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:50.130675 1401287 system_pods.go:86] 15 kube-system pods found
	I0926 22:29:50.130828 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:50.130842 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130854 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130861 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running
	I0926 22:29:50.130866 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:50.130875 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:50.130885 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:50.130892 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Running
	I0926 22:29:50.130900 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:50.130908 1401287 system_pods.go:89] "metrics-server-85b7d694d7-mjlqr" [18663e65-efc9-4e15-8dad-c4e23a7f7f18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:29:50.130924 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:50.130932 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:50.130942 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:50.130951 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:50.130958 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:29:50.130969 1401287 system_pods.go:126] duration metric: took 1.692943423s to wait for k8s-apps to be running ...
	I0926 22:29:50.130981 1401287 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:29:50.131036 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:29:50.228682 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.066443039s)
	I0926 22:29:50.228730 1401287 addons.go:479] Verifying addon ingress=true in "addons-619347"
	I0926 22:29:50.229183 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.06360117s)
	I0926 22:29:50.229277 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.06027927s)
	I0926 22:29:50.229386 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.046934043s)
	W0926 22:29:50.229417 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:50.229439 1401287 retry.go:31] will retry after 244.753675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
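This validation failure is consistent with the earlier scp line that copied ig-crd.yaml at only 14 bytes: kubectl rejects a manifest with no apiVersion or kind, and minikube falls back to retrying (eventually with apply --force, below). A minimal preflight sketch that would surface such a truncated manifest before kubectl does, assuming gopkg.in/yaml.v3 is available; it checks only the first YAML document in the file.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path mirrors the log; any single-document manifest works here.
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	var doc map[string]interface{}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// These are exactly the fields kubectl reported as unset.
	for _, field := range []string{"apiVersion", "kind"} {
		if _, ok := doc[field]; !ok {
			fmt.Printf("manifest missing %q\n", field)
		}
	}
}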
	I0926 22:29:50.229506 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.040735105s)
	I0926 22:29:50.229590 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040703194s)
	I0926 22:29:50.229630 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.011384775s)
	I0926 22:29:50.229674 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00860092s)
	I0926 22:29:50.229967 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.834111385s)
	I0926 22:29:50.229990 1401287 addons.go:479] Verifying addon registry=true in "addons-619347"
	I0926 22:29:50.230454 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.762415616s)
	I0926 22:29:50.230635 1401287 addons.go:479] Verifying addon metrics-server=true in "addons-619347"
	I0926 22:29:50.230518 1401287 out.go:179] * Verifying ingress addon...
	I0926 22:29:50.233574 1401287 out.go:179] * Verifying registry addon...
	I0926 22:29:50.234496 1401287 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:29:50.236422 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:29:50.239932 1401287 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:29:50.239997 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.242126 1401287 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:29:50.242195 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:50.474912 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:50.747610 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.749841 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
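The kapi waits above poll pods matching a label selector until each one leaves Pending. A minimal client-go sketch of that wait, assuming k8s.io/client-go is available; the kubeconfig path and the registry label selector are taken from the log, and the 6m timeout mirrors the waits used elsewhere in this run.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig path seen in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll until every pod matching the selector reports phase Running.
	selector := "kubernetes.io/minikube-addons=registry"
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				running++
			}
		}
		fmt.Printf("%d/%d pods running\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}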
	I0926 22:29:51.178335 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.659134928s)
	I0926 22:29:51.178429 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.645380917s)
	I0926 22:29:51.178600 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.613538879s)
	I0926 22:29:51.178880 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.540232302s)
	I0926 22:29:51.179022 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461568485s)
	W0926 22:29:51.179054 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:29:51.179074 1401287 retry.go:31] will retry after 372.721698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
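
	[annotation] The failure above is an ordering problem: the single kubectl apply submits the VolumeSnapshotClass together with the CRDs that define it, and the API server has not established the new types yet, hence "ensure CRDs are installed first". A minimal sketch of the usual fix, polling the CRD's Established condition with the apiextensions client before applying the class. This is an illustration, not minikube's retry code; the kubeconfig path and timeout are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRD blocks until the named CRD reports Established=True, which is
// what the "ensure CRDs are installed first" error above is asking for
// before a VolumeSnapshotClass object can be created.
func waitForCRD(c apiextclient.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("CRD %s not established within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CRD name taken from the failing manifest set above.
	if err := waitForCRD(client, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply csi-hostpath-snapshotclass.yaml")
}
```
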
	I0926 22:29:51.180773 1401287 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-619347 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:29:51.223913 1401287 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.092854415s)
	I0926 22:29:51.223952 1401287 system_svc.go:56] duration metric: took 1.092967022s WaitForService to wait for kubelet
	I0926 22:29:51.223963 1401287 kubeadm.go:586] duration metric: took 3.439402099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:51.223986 1401287 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:29:51.224342 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.598487819s)
	I0926 22:29:51.224378 1401287 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:51.225939 1401287 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:29:51.228192 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:29:51.229798 1401287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 22:29:51.229833 1401287 node_conditions.go:123] node cpu capacity is 8
	I0926 22:29:51.229856 1401287 node_conditions.go:105] duration metric: took 5.863751ms to run NodePressure ...
	I0926 22:29:51.229880 1401287 start.go:241] waiting for startup goroutines ...
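
	[annotation] The node_conditions.go check above (ephemeral-storage capacity, CPU capacity, NodePressure) corresponds roughly to the following client-go sketch; this is an illustration of the check, not minikube's implementation.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two capacity figures printed in the log above.
		fmt.Println("ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
		fmt.Println("cpu:", n.Status.Capacity.Cpu().String())
		// "Verifying NodePressure" amounts to checking these conditions.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Println("node under pressure:", c.Type)
			}
		}
	}
}
```
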
	I0926 22:29:51.234026 1401287 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:29:51.234047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.241936 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.243854 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
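
	[annotation] The kapi.go:96 lines that dominate the rest of this log are a poll loop: list pods by label selector, report the phase, sleep, repeat until everything is Running or the timeout hits. A hedged sketch of that pattern; the selector and namespace are taken from the log, while the helper name waitForLabel and the 500ms cadence are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector until all are Running,
// printing the same kind of status line as kapi.go:96 above.
func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n",
						selector, p.Status.Phase)
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence of the log above
	}
	return fmt.Errorf("pods %q not Running within %v", selector, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
```
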
	I0926 22:29:51.552700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:51.709711 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.234742831s)
	W0926 22:29:51.709760 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:51.709786 1401287 retry.go:31] will retry after 268.370333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
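
	[annotation] kubectl rejects ig-crd.yaml because one of its documents is missing the TypeMeta header ("apiVersion not set, kind not set"), so every retry below fails identically until the manifest itself is fixed. A crude pre-flight check for those two fields; real code would YAML-decode into metav1.TypeMeta rather than scan strings, and the manifest fragments here are hypothetical (the actual ig-crd.yaml contents are not in the log).

```go
package main

import (
	"fmt"
	"strings"
)

// hasTypeMeta scans a YAML document for the two fields kubectl's
// validator reports as missing above. Illustration only.
func hasTypeMeta(doc string) bool {
	var hasAPIVersion, hasKind bool
	for _, line := range strings.Split(doc, "\n") {
		t := strings.TrimSpace(line)
		hasAPIVersion = hasAPIVersion || strings.HasPrefix(t, "apiVersion:")
		hasKind = hasKind || strings.HasPrefix(t, "kind:")
	}
	return hasAPIVersion && hasKind
}

func main() {
	broken := "metadata:\n  name: example" // header missing, as in the error above
	fixed := "apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  name: example"
	fmt.Println(hasTypeMeta(broken), hasTypeMeta(fixed)) // false true
}
```
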
	I0926 22:29:51.732520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.738383 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.739361 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.978851 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:52.231665 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.237879 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.238844 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:52.731592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.738117 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.739055 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.232517 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.237333 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.239471 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.731711 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.737791 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.738851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.244329 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.691529274s)
	I0926 22:29:54.244428 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.26554658s)
	W0926 22:29:54.244461 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.244491 1401287 retry.go:31] will retry after 392.451192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.303455 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.303472 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.303697 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.637695 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.732408 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.737348 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.738840 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:55.209616 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.209647 1401287 retry.go:31] will retry after 748.885115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
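
	[annotation] The retry.go:31 intervals in this section (372ms, 268ms, 748ms, later 2.8s, 5.2s, 16.5s) follow a roughly increasing, jittered backoff. A minimal sketch of that pattern; applyManifests is a hypothetical stand-in for the failing kubectl step, not minikube's retry package.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// applyManifests stands in for the failing
// `kubectl apply --force -f ig-crd.yaml ...` step above;
// here it succeeds on the fourth attempt for demonstration.
func applyManifests(attempt int) error {
	if attempt < 3 {
		return errors.New("Process exited with status 1")
	}
	return nil
}

func main() {
	base := 300 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		if err := applyManifests(attempt); err == nil {
			fmt.Println("apply succeeded")
			return
		} else {
			// Roughly doubling delay plus jitter, matching the shape of
			// the "will retry after ..." intervals in this log.
			delay := base<<uint(attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
}
```
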
	I0926 22:29:55.232030 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.238153 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.239111 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.331196 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:55.331261 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.348751 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:55.457803 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:29:55.479373 1401287 addons.go:238] Setting addon gcp-auth=true in "addons-619347"
	I0926 22:29:55.479441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:55.479850 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:55.499515 1401287 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:29:55.499611 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.520325 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
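
	[annotation] The sshutil.go lines show minikube dialing 127.0.0.1:33881, the host port that `docker container inspect` mapped to the node's 22/tcp. Roughly equivalent golang.org/x/crypto/ssh code, with the key path and port copied from the log; skipping host-key verification is an assumption that only makes sense for a throwaway test node like this one.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the sshutil.go line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test-only: no host-key pinning for an ephemeral minikube node.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33881", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected to addons-619347 over the mapped 22/tcp port")
}
```
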
	I0926 22:29:55.618144 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:55.619415 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:29:55.621107 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:29:55.621131 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:29:55.643383 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:29:55.643405 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:29:55.664765 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.664789 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:29:55.685778 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
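
	[annotation] Each ssh_runner.go "Run:" line then executes over such a connection. A sketch of running one of these kubectl applies through an SSH session and capturing combined stdout+stderr, which is exactly what the stdout:/stderr: blocks in this log record; runApply and its client argument are hypothetical, not minikube's ssh_runner API.

```go
package sshrun

import "golang.org/x/crypto/ssh"

// runApply executes a command (e.g. the gcp-auth kubectl apply above) in a
// fresh SSH session and returns its combined stdout and stderr.
func runApply(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}
```
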
	I0926 22:29:55.732904 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.737583 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.739755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.958754 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:56.145831 1401287 addons.go:479] Verifying addon gcp-auth=true in "addons-619347"
	I0926 22:29:56.147565 1401287 out.go:179] * Verifying gcp-auth addon...
	I0926 22:29:56.149656 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:29:56.153451 1401287 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:29:56.153473 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.234575 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:56.238524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.240547 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.753812 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.754009 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.754105 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.754175 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:29:56.846438 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:56.846489 1401287 retry.go:31] will retry after 1.306898572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:57.154380 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.257757 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:57.257867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.257914 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.653373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.731799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.738612 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.739139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.153929 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:58.154158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.231698 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.238196 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.239871 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.653423 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.732047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.737700 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.739381 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:58.876131 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:58.876169 1401287 retry.go:31] will retry after 1.510195391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:59.153627 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.231973 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.237626 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.239442 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:59.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.732199 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.737381 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.739318 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.154349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.234946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.237553 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.238970 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.387250 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.653371 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.754562 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.754718 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.754737 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:01.142390 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.142433 1401287 retry.go:31] will retry after 2.823589735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.153470 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.231864 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.238191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.238929 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:01.653817 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.732601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.738292 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.738765 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.153510 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.232061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:02.237606 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.239333 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.653691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.785100 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.785181 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.785282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.228531 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.231398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.237322 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.239087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.653658 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.754788 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.754892 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.754903 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.966722 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.154061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:04.232281 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.240238 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.653129 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:04.657965 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.657997 1401287 retry.go:31] will retry after 3.931075545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.732441 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.738568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.739156 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.153676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.231619 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.237952 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.238902 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.653858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.732363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.737932 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.739708 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.153005 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.232588 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.238508 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.238930 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.653625 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.732133 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.737660 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.739398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.153662 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.231544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.238376 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.238896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.653623 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.732168 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.737693 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.739572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.153679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.231882 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.237268 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.239112 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.589607 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:08.653128 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.732858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.737867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.739211 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.153590 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.232224 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.237615 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.239714 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:09.284897 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:09.284936 1401287 retry.go:31] will retry after 5.203674911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:09.653321 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.731879 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.737435 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.739163 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.153976 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.232225 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.237891 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.239799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.652648 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.732289 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.740552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.740620 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.153709 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.231772 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.237915 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.238911 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.653574 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.731464 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.737883 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.738742 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.154161 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.255109 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.255143 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.255266 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.653341 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.732278 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.737987 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.739675 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.152601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.231735 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.238458 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.238993 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.653963 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.732677 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.737942 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.738815 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.153349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.231707 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.238128 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.238724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.489029 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:14.654034 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.755687 1401287 kapi.go:107] duration metric: took 24.519261155s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 22:30:14.755725 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.755952 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.152792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:15.222551 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.222596 1401287 retry.go:31] will retry after 5.506436948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.231403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.237852 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.662260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.731552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.738097 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.154099 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.231851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.237284 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.653593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.732118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.737657 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.153191 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.232638 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.238260 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.654087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.732572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.737869 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.153497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.231724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.237938 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.653474 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.754180 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.754664 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.153672 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.231937 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.237429 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.653500 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.732332 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.737902 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.153193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.231558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.238229 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.653596 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.729807 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:20.755463 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.755497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.156185 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.232540 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.237339 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:21.506242 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.506283 1401287 retry.go:31] will retry after 16.573257161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
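Note: the root cause of this apply failure is visible in the stderr above: the first document in ig-crd.yaml does not declare its type metadata, and kubectl validates every document in a file, so the whole apply exits non-zero even though the objects from ig-deployment.yaml are reported "unchanged"/"configured". Every Kubernetes manifest document must carry both apiVersion and kind; a minimal, hedged sketch of a complete CRD header (the group and names below are illustrative, not read from the actual file):

    apiVersion: apiextensions.k8s.io/v1      # required type metadata
    kind: CustomResourceDefinition           # required type metadata
    metadata:
      name: traces.gadget.kinvolk.io         # hypothetical name for illustration
    spec:
      group: gadget.kinvolk.io
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true

Because the file on disk never changes between attempts, the retries scheduled below hit the same validation error each time.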
	I0926 22:30:21.653673 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.746511 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.747024 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.154193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.255191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.255336 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.653679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.732249 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.765524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.153260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.232592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.237546 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.653954 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.732247 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.738249 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.153348 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.231679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.238206 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.653640 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.754172 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.754291 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.155071 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.232312 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.237762 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.654098 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.755772 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.756117 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.153020 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.232253 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.237493 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.653784 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.731755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.738149 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.153957 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.231912 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.237304 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.740418 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.740422 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.740489 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.153035 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.232351 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.253652 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.653198 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.732594 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.738617 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.153818 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.255363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.255402 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.653377 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.795403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.795568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.154437 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.255203 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.255255 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.654322 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.731875 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.738025 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.153152 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.232403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.686139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.732196 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.737642 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.153176 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.232567 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.238193 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.653520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.731607 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.738120 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.153329 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.231836 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.238090 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.653138 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.753505 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.753695 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.153545 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.232120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.237425 1401287 kapi.go:107] duration metric: took 44.002941806s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:30:34.654015 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.732058 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.153560 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.232023 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.653149 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.733392 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.195661 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.294162 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.653726 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.732044 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.153456 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.231729 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.653114 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.732251 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.080636 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:38.154372 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.231375 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.653809 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.782691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:30:38.852949 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:38.852986 1401287 retry.go:31] will retry after 15.881899723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:39.153131 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.232352 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.653465 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.731259 1401287 kapi.go:107] duration metric: took 48.503064069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0926 22:30:40.153304 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.652405 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.153555 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.652676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.152544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.653090 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.153739 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.652905 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.153461 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.653397 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.652913 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.153414 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.652678 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.153158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.653282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.152600 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.652859 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.153593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.652792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.152790 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.652641 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.153977 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.653558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.153042 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.653062 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.153284 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.653232 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.153389 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.653118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.735407 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:55.153085 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:55.342933 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.342967 1401287 retry.go:31] will retry after 26.788650375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.653379 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.653069 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.153833 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.153701 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.653075 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.153896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.652981 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.152946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.653566 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.152984 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.653887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.153373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.654120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.153468 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.653248 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.153804 1401287 kapi.go:107] duration metric: took 1m8.004150077s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:04.155559 1401287 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-619347 cluster.
	I0926 22:31:04.156826 1401287 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:04.158107 1401287 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
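Note: the opt-out hint above refers to the gcp-auth admission webhook, which skips credential injection for labeled pods. A minimal sketch of the label in place (the pod name and image are illustrative, and the "true" value follows common minikube usage and is an assumption here; only the label key is quoted from the hint above):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical pod for illustration
      labels:
        gcp-auth-skip-secret: "true"     # label key from the hint above
    spec:
      containers:
        - name: app
          image: busybox:stable
          command: ["sleep", "3600"]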
	I0926 22:31:22.132659 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:22.704256 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0926 22:31:22.704391 1401287 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
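Note: this is the terminal failure for inspektor-gadget; after the earlier in-cluster retries hit the same validation error, the addons manager surfaces the warning but does not abort startup, which is why the remaining addons are still listed as enabled immediately below.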
	I0926 22:31:22.706313 1401287 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, default-storageclass, volcano, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0926 22:31:22.707981 1401287 addons.go:514] duration metric: took 1m34.923379678s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin metrics-server default-storageclass volcano registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0926 22:31:22.708039 1401287 start.go:246] waiting for cluster config update ...
	I0926 22:31:22.708063 1401287 start.go:255] writing updated cluster config ...
	I0926 22:31:22.708371 1401287 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:22.712517 1401287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:22.716253 1401287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.720372 1401287 pod_ready.go:94] pod "coredns-66bc5c9577-qctdw" is "Ready"
	I0926 22:31:22.720398 1401287 pod_ready.go:86] duration metric: took 4.121653ms for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.722139 1401287 pod_ready.go:83] waiting for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.725796 1401287 pod_ready.go:94] pod "etcd-addons-619347" is "Ready"
	I0926 22:31:22.725814 1401287 pod_ready.go:86] duration metric: took 3.654877ms for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.727751 1401287 pod_ready.go:83] waiting for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.731230 1401287 pod_ready.go:94] pod "kube-apiserver-addons-619347" is "Ready"
	I0926 22:31:22.731252 1401287 pod_ready.go:86] duration metric: took 3.484052ms for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.733085 1401287 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.117180 1401287 pod_ready.go:94] pod "kube-controller-manager-addons-619347" is "Ready"
	I0926 22:31:23.117210 1401287 pod_ready.go:86] duration metric: took 384.107267ms for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.316538 1401287 pod_ready.go:83] waiting for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.716914 1401287 pod_ready.go:94] pod "kube-proxy-sdscg" is "Ready"
	I0926 22:31:23.716945 1401287 pod_ready.go:86] duration metric: took 400.37971ms for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.917057 1401287 pod_ready.go:83] waiting for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316600 1401287 pod_ready.go:94] pod "kube-scheduler-addons-619347" is "Ready"
	I0926 22:31:24.316631 1401287 pod_ready.go:86] duration metric: took 399.543309ms for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316645 1401287 pod_ready.go:40] duration metric: took 1.604097264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:24.363816 1401287 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:24.365720 1401287 out.go:179] * Done! kubectl is now configured to use "addons-619347" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 26 22:37:33 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:37:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67932f7f901a55b118d183f4f628a937190e9c1ce489d6dfa9182a92804e46ec/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:37:33 addons-619347 dockerd[1116]: time="2025-09-26T22:37:33.389045626Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:37:33 addons-619347 dockerd[1116]: time="2025-09-26T22:37:33.420898973Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:37:46 addons-619347 dockerd[1116]: time="2025-09-26T22:37:46.332274758Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:37:46 addons-619347 dockerd[1116]: time="2025-09-26T22:37:46.364444562Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:08 addons-619347 dockerd[1116]: time="2025-09-26T22:38:08.332232521Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:38:08 addons-619347 dockerd[1116]: time="2025-09-26T22:38:08.421584413Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:08 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:38:08Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 26 22:38:14 addons-619347 dockerd[1116]: time="2025-09-26T22:38:14.410829521Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:42 addons-619347 dockerd[1116]: time="2025-09-26T22:38:42.448587681Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:52 addons-619347 dockerd[1116]: time="2025-09-26T22:38:52.328938208Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:38:52 addons-619347 dockerd[1116]: time="2025-09-26T22:38:52.362556615Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:39:33 addons-619347 dockerd[1116]: time="2025-09-26T22:39:33.432864462Z" level=info msg="ignoring event" container=67932f7f901a55b118d183f4f628a937190e9c1ce489d6dfa9182a92804e46ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:40:03 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:40:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02ed02bee6d73780bddc06cb8b6a6b9f7bca62787f463b8a37a9607797b22ec3/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:40:03 addons-619347 dockerd[1116]: time="2025-09-26T22:40:03.827570082Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:03 addons-619347 dockerd[1116]: time="2025-09-26T22:40:03.951061919Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:40:03 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:40:03Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 26 22:40:18 addons-619347 dockerd[1116]: time="2025-09-26T22:40:18.337533725Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:18 addons-619347 dockerd[1116]: time="2025-09-26T22:40:18.367998157Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:40:24 addons-619347 dockerd[1116]: time="2025-09-26T22:40:24.764931443Z" level=info msg="ignoring event" container=02ed02bee6d73780bddc06cb8b6a6b9f7bca62787f463b8a37a9607797b22ec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:40:49 addons-619347 dockerd[1116]: time="2025-09-26T22:40:49.679587150Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0
	Sep 26 22:40:49 addons-619347 dockerd[1116]: time="2025-09-26T22:40:49.706418819Z" level=info msg="ignoring event" container=8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:40:49 addons-619347 dockerd[1116]: time="2025-09-26T22:40:49.848919751Z" level=info msg="ignoring event" container=287d670c65c8c8a8873127a7df0f4d937218417b7d71e0eba154a8654e5c7081 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:41:04 addons-619347 dockerd[1116]: time="2025-09-26T22:41:04.418844414Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:41:26 addons-619347 dockerd[1116]: time="2025-09-26T22:41:26.400897955Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
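Note: the recurring toomanyrequests errors are Docker Hub's anonymous pull rate limit being enforced against the node; every pull of docker.io/library/busybox is refused, which is consistent with pods elsewhere in this report sitting in ContainersNotReady. A hedged mitigation sketch, assuming the image is reachable from the host running minikube (a digest-pinned reference like the busybox pulls above may still force a registry fetch):

    # Pull once on the host (optionally after `docker login` to lift the
    # anonymous limit), then side-load the image into the cluster node so
    # the kubelet does not need to pull it anonymously.
    docker pull busybox:stable
    minikube -p addons-619347 image load busybox:stable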
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	68f3619046214       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   a60bbae2dab32       busybox
	ce8cf08b141fd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   de2617410b653       csi-hostpathplugin-rbzvs
	7dcfe799d3773       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   de2617410b653       csi-hostpathplugin-rbzvs
	931b17716c09b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   de2617410b653       csi-hostpathplugin-rbzvs
	7d61fc01cddfd       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   de2617410b653       csi-hostpathplugin-rbzvs
	1dafa88bf03ff       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   de2617410b653       csi-hostpathplugin-rbzvs
	728e0cf65646d       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             11 minutes ago      Running             controller                               0                   db768efcc91e0       ingress-nginx-controller-9cc49f96f-ghq9n
	4830d4a0f03bf       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   b9ce75482df7a       csi-hostpath-resizer-0
	2272adc16d5b8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   de2617410b653       csi-hostpathplugin-rbzvs
	e37820c539b12       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   84987d9e1a070       csi-hostpath-attacher-0
	4cc3707d46bf8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   3dd9df25ed9e8       snapshot-controller-7d9fbc56b8-2zg9l
	d91e1c6dab5ec       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   f5d5e2661efee       snapshot-controller-7d9fbc56b8-ml295
	2be186df9d067       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   11 minutes ago      Exited              patch                                    0                   e4cb125881f09       ingress-nginx-admission-patch-65dgz
	64e745dd36107       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   11 minutes ago      Exited              create                                   0                   8c9898018e8fa       ingress-nginx-admission-create-dbtd8
	a6d48b6dd738f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            11 minutes ago      Running             gadget                                   0                   1e350b656bd65       gadget-9rfhl
	6c95150654506       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         11 minutes ago      Running             minikube-ingress-dns                     0                   8f1cf5e8da338       kube-ingress-dns-minikube
	d9822a41079f6       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   e7dd4d41d742b       storage-provisioner
	9ea233eb6b299       52546a367cc9e                                                                                                                                11 minutes ago      Running             coredns                                  0                   3dff0fbc29922       coredns-66bc5c9577-qctdw
	227d066a100ce       df0860106674d                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   9cd7f6237aa02       kube-proxy-sdscg
	f5b2050f68de5       a0af72f2ec6d6                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   fbe20fd4325ef       kube-controller-manager-addons-619347
	8209664c099ee       46169d968e920                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   779a1e971ca62       kube-scheduler-addons-619347
	9d1b130b03b02       90550c43ad2bc                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   f2516f75f5542       kube-apiserver-addons-619347
	5ae0da6e5bfbf       5f1f5298c888d                                                                                                                                12 minutes ago      Running             etcd                                     0                   fa78f9e958055       etcd-addons-619347
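Note: every long-running container in this table shows a single attempt and a Running state; the only Exited entries are the ingress-nginx admission create/patch jobs, which are one-shot by design. Nothing here suggests crash loops, which points the failures at image pulls rather than at workloads that started and died.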
	
	
	==> controller_ingress [728e0cf65646] <==
	I0926 22:30:34.887711       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f6bd66b9-f1c6-476b-a596-e7c7ed771583", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0926 22:30:36.082816       7 nginx.go:319] "Starting NGINX process"
	I0926 22:30:36.082933       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0926 22:30:36.083217       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0926 22:30:36.083836       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:30:36.090331       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0926 22:30:36.090382       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-ghq9n"
	I0926 22:30:36.093730       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	I0926 22:30:36.132936       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:30:36.133071       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0926 22:30:36.133167       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0926 22:30:36.196634       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	W0926 22:35:31.376780       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0926 22:35:31.378855       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0926 22:35:31.381841       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0926 22:35:31.382122       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"49217b09-005f-4368-a333-dd023eb3d6ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0926 22:35:32.307592       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0926 22:35:32.308416       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:35:32.351887       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:35:32.352086       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0926 22:35:35.641811       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0926 22:35:36.097517       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0926 22:35:36.101827       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"49217b09-005f-4368-a333-dd023eb3d6ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2279", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0926 22:35:38.975235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0926 22:35:42.307868       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
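Note: the "does not have any active Endpoint" warnings mean the default/nginx Service selects a pod that never reported Ready, so its endpoints list stays empty; the Ingress object itself synced and was assigned 192.168.49.2. A hedged way to confirm from the same kubectl context (a pod named nginx appears in the node's pod list later in this report):

    kubectl --context addons-619347 get endpoints nginx -n default
    kubectl --context addons-619347 describe pod nginx -n default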
	
	
	==> coredns [9ea233eb6b29] <==
	[INFO] 10.244.0.8:44508 - 52121 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000116925s
	[INFO] 10.244.0.8:40588 - 27017 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000090303s
	[INFO] 10.244.0.8:40588 - 26695 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000142751s
	[INFO] 10.244.0.8:32780 - 27322 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000091822s
	[INFO] 10.244.0.8:32780 - 26988 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130714s
	[INFO] 10.244.0.8:34268 - 17213 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132338s
	[INFO] 10.244.0.8:34268 - 16970 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009144s
	[INFO] 10.244.0.27:32935 - 45410 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000327431s
	[INFO] 10.244.0.27:49406 - 23181 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000412135s
	[INFO] 10.244.0.27:42691 - 10663 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129221s
	[INFO] 10.244.0.27:49167 - 28887 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157287s
	[INFO] 10.244.0.27:40544 - 36384 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160696s
	[INFO] 10.244.0.27:45145 - 3022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123636s
	[INFO] 10.244.0.27:57336 - 33875 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00499531s
	[INFO] 10.244.0.27:41391 - 16202 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005792959s
	[INFO] 10.244.0.27:59854 - 59303 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005004398s
	[INFO] 10.244.0.27:34824 - 56259 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005925015s
	[INFO] 10.244.0.27:36869 - 29305 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004734879s
	[INFO] 10.244.0.27:45437 - 987 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00498032s
	[INFO] 10.244.0.27:47010 - 60828 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005607554s
	[INFO] 10.244.0.27:46662 - 45152 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007088447s
	[INFO] 10.244.0.27:60306 - 17345 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000925116s
	[INFO] 10.244.0.27:50259 - 39178 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001983867s
	[INFO] 10.244.0.32:60236 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000345556s
	[INFO] 10.244.0.32:41492 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202476s
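Note: the long NXDOMAIN ladders above are normal search-path expansion, not resolver failures. With options ndots:5, any name containing fewer than five dots is tried against each search domain before being queried as-is, so a lookup of storage.googleapis.com walks the namespace, cluster, and GCE-internal suffixes until the bare name finally returns NOERROR. Reconstructed from the query sequence (and the resolv.conf rewrite logged in the Docker section, with the namespace substituted), the querying pod's resolver config plausibly looks like:

    nameserver 10.96.0.10
    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    options ndots:5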
	
	
	==> describe nodes <==
	Name:               addons-619347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-619347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-619347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-619347
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-619347"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-619347
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:41:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-619347
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 0728f6ac4f7f4421b7f9eeb1f21a8502
	  System UUID:                bfe74e22-ee1d-47b3-9c54-c1f6ef287d9d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gadget                      gadget-9rfhl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-ghq9n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-qctdw                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-rbzvs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-619347                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-619347                250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-619347       200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-sdscg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-619347                100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-2zg9l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-ml295        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node addons-619347 event: Registered Node addons-619347 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 b0 60 62 2a 0f 08 06
	[  +2.140079] IPv4: martian source 10.244.0.8 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.023792] IPv4: martian source 10.244.0.8 from 10.244.0.7, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[  +1.345643] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[Sep26 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 1b 32 9d 1a 30 08 06
	[  +0.000481] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.000612] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	
	
	==> etcd [5ae0da6e5bfb] <==
	{"level":"warn","ts":"2025-09-26T22:29:51.882030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:56.750352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.699546ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:29:56.750520Z","caller":"traceutil/trace.go:172","msg":"trace[545120805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1029; }","duration":"124.835239ms","start":"2025-09-26T22:29:56.625613Z","end":"2025-09-26T22:29:56.750448Z","steps":["trace[545120805] 'range keys from in-memory index tree'  (duration: 124.657379ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:29:56.750545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.950667ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040237519390281 > lease_revoke:<id:70cc9988256cc037>","response":"size:29"}
	{"level":"info","ts":"2025-09-26T22:29:56.750622Z","caller":"traceutil/trace.go:172","msg":"trace[550957012] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"110.947341ms","start":"2025-09-26T22:29:56.639663Z","end":"2025-09-26T22:29:56.750610Z","steps":["trace[550957012] 'read index received'  (duration: 40.919µs)","trace[550957012] 'applied index is now lower than readState.Index'  (duration: 110.905488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:29:56.750818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.149289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/gadget/gadget-role\" limit:1 ","response":"range_response_count:1 size:929"}
	{"level":"info","ts":"2025-09-26T22:29:56.750855Z","caller":"traceutil/trace.go:172","msg":"trace[1241908482] range","detail":"{range_begin:/registry/roles/gadget/gadget-role; range_end:; response_count:1; response_revision:1029; }","duration":"111.19346ms","start":"2025-09-26T22:29:56.639653Z","end":"2025-09-26T22:29:56.750846Z","steps":["trace[1241908482] 'agreement among raft nodes before linearized reading'  (duration: 111.040998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:11.365351Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.170976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-26T22:30:11.365445Z","caller":"traceutil/trace.go:172","msg":"trace[2098668277] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:1075; }","duration":"101.279805ms","start":"2025-09-26T22:30:11.264150Z","end":"2025-09-26T22:30:11.365430Z","steps":["trace[2098668277] 'range keys from in-memory index tree'  (duration: 101.005862ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:16.821834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.870731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.885825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.893998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.925127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.935713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.946969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.959548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.971710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.978983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.988574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.999879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:30:33.500976Z","caller":"traceutil/trace.go:172","msg":"trace[381421690] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"106.589616ms","start":"2025-09-26T22:30:33.394366Z","end":"2025-09-26T22:30:33.500955Z","steps":["trace[381421690] 'process raft request'  (duration: 106.446725ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:39:38.933371Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1764}
	{"level":"info","ts":"2025-09-26T22:39:38.967116Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1764,"took":"33.118081ms","hash":3118520929,"current-db-size-bytes":9007104,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":6070272,"current-db-size-in-use":"6.1 MB"}
	{"level":"info","ts":"2025-09-26T22:39:38.967158Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3118520929,"revision":1764,"compact-revision":-1}
	
	
	==> kernel <==
	 22:41:46 up  4:24,  0 users,  load average: 2.61, 2.03, 1.76
	Linux addons-619347 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9d1b130b03b0] <==
	W0926 22:34:43.798029       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0926 22:34:43.851264       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0926 22:34:43.878591       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0926 22:34:43.906988       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0926 22:34:43.944869       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0926 22:34:44.127664       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0926 22:34:45.317063       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0926 22:35:01.714574       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41168: use of closed network connection
	E0926 22:35:01.904835       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41184: use of closed network connection
	I0926 22:35:11.393202       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.151.186"}
	I0926 22:35:31.379659       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0926 22:35:31.551505       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.109.176"}
	I0926 22:35:49.845467       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:36:14.663730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:36:27.796093       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0926 22:37:00.802682       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:18.863268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:08.393742       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:27.210526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:17.734882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:39.792776       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:39:49.815861       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:38.099623       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:10.371771       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:41.275865       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [f5b2050f68de] <==
	E0926 22:40:49.555012       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:49.556126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:55.488951       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:55.489903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:58.144930       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:58.145946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:01.780946       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0926 22:41:08.444737       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:08.445704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:08.749548       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:08.750582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:15.816555       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:15.817588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:16.781927       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0926 22:41:21.126058       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:21.127152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:31.782386       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0926 22:41:31.907344       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:31.908326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:36.675543       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:36.676602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:41.301581       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:41.302654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:41:42.209797       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:41:42.211036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [227d066a100c] <==
	I0926 22:29:48.632051       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:29:48.823798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:48.926913       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:48.926974       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:29:48.927216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:48.966553       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:29:48.966624       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:48.976081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:48.977337       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:48.977360       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:48.983888       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:48.983916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:48.984026       1 config.go:200] "Starting service config controller"
	I0926 22:29:48.984052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:48.984116       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:48.984123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:48.987589       1 config.go:309] "Starting node config controller"
	I0926 22:29:48.987610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:48.987619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:49.084696       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:29:49.084764       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:49.085094       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8209664c099e] <==
	E0926 22:29:39.815534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:39.815639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:39.815741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:39.815842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:39.815881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:39.815924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:39.815978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:39.816083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:39.816079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:39.816116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:39.816205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:29:39.816287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:39.816442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:39.816526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.634465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:40.655555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.682056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:40.739683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:40.750044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:40.783186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:40.869968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:40.950295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:41.003301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:41.010326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0926 22:29:41.412472       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:40:49 addons-619347 kubelet[2321]: I0926 22:40:49.923899    2321 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45w7j\" (UniqueName: \"kubernetes.io/projected/5be742ee-b26c-4ab5-bb91-8cb15e1db086-kube-api-access-45w7j\") pod \"5be742ee-b26c-4ab5-bb91-8cb15e1db086\" (UID: \"5be742ee-b26c-4ab5-bb91-8cb15e1db086\") "
	Sep 26 22:40:49 addons-619347 kubelet[2321]: I0926 22:40:49.924380    2321 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be742ee-b26c-4ab5-bb91-8cb15e1db086-config-volume" (OuterVolumeSpecName: "config-volume") pod "5be742ee-b26c-4ab5-bb91-8cb15e1db086" (UID: "5be742ee-b26c-4ab5-bb91-8cb15e1db086"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Sep 26 22:40:49 addons-619347 kubelet[2321]: I0926 22:40:49.926013    2321 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be742ee-b26c-4ab5-bb91-8cb15e1db086-kube-api-access-45w7j" (OuterVolumeSpecName: "kube-api-access-45w7j") pod "5be742ee-b26c-4ab5-bb91-8cb15e1db086" (UID: "5be742ee-b26c-4ab5-bb91-8cb15e1db086"). InnerVolumeSpecName "kube-api-access-45w7j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 26 22:40:49 addons-619347 kubelet[2321]: I0926 22:40:49.967738    2321 scope.go:117] "RemoveContainer" containerID="8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0"
	Sep 26 22:40:49 addons-619347 kubelet[2321]: I0926 22:40:49.984249    2321 scope.go:117] "RemoveContainer" containerID="8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0"
	Sep 26 22:40:49 addons-619347 kubelet[2321]: E0926 22:40:49.985272    2321 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0" containerID="8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0"
	Sep 26 22:40:49 addons-619347 kubelet[2321]: I0926 22:40:49.985324    2321 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0"} err="failed to get container status \"8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8225f70c7965537bfc5294b7b73cfa90ebbc03169d9c0f36398bba161eda2ab0"
	Sep 26 22:40:50 addons-619347 kubelet[2321]: I0926 22:40:50.024218    2321 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5be742ee-b26c-4ab5-bb91-8cb15e1db086-config-volume\") on node \"addons-619347\" DevicePath \"\""
	Sep 26 22:40:50 addons-619347 kubelet[2321]: I0926 22:40:50.024261    2321 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45w7j\" (UniqueName: \"kubernetes.io/projected/5be742ee-b26c-4ab5-bb91-8cb15e1db086-kube-api-access-45w7j\") on node \"addons-619347\" DevicePath \"\""
	Sep 26 22:40:50 addons-619347 kubelet[2321]: I0926 22:40:50.320185    2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be742ee-b26c-4ab5-bb91-8cb15e1db086" path="/var/lib/kubelet/pods/5be742ee-b26c-4ab5-bb91-8cb15e1db086/volumes"
	Sep 26 22:41:02 addons-619347 kubelet[2321]: E0926 22:41:02.310954    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:41:04 addons-619347 kubelet[2321]: E0926 22:41:04.421334    2321 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:41:04 addons-619347 kubelet[2321]: E0926 22:41:04.421387    2321 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:41:04 addons-619347 kubelet[2321]: E0926 22:41:04.421470    2321 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:41:04 addons-619347 kubelet[2321]: E0926 22:41:04.421527    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:41:13 addons-619347 kubelet[2321]: E0926 22:41:13.310108    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:41:15 addons-619347 kubelet[2321]: I0926 22:41:15.310679    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:41:18 addons-619347 kubelet[2321]: E0926 22:41:18.312542    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:41:26 addons-619347 kubelet[2321]: E0926 22:41:26.403691    2321 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 26 22:41:26 addons-619347 kubelet[2321]: E0926 22:41:26.403748    2321 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 26 22:41:26 addons-619347 kubelet[2321]: E0926 22:41:26.403832    2321 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(e055794c-8563-455d-956f-81e9b7627d09): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:41:26 addons-619347 kubelet[2321]: E0926 22:41:26.403862    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:41:31 addons-619347 kubelet[2321]: E0926 22:41:31.312991    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:41:37 addons-619347 kubelet[2321]: E0926 22:41:37.310660    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:41:44 addons-619347 kubelet[2321]: E0926 22:41:44.311823    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	
	
	==> storage-provisioner [d9822a41079f] <==
	W0926 22:41:21.241081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:23.244498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:23.248408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:25.251417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:25.255865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:27.259238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:27.263682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:29.266981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:29.270948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:31.273932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:31.279567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:33.283351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:33.288422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:35.292620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:35.296891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:37.300058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:37.304122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:39.307364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:39.311252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:41.313277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:41.316678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:43.320004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:43.326385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:45.331861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:41:45.337859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
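Note on the captured logs above: the node is Ready, every controller reports synced caches, and etcd/apiserver show only routine warnings; the failure signal is concentrated in the kubelet section, where every pull of docker.io/nginx and docker.io/nginx:alpine fails with Docker Hub's unauthenticated "toomanyrequests" rate limit. If that needs confirming by hand, the pull can be repeated on the node itself (a sketch using this report's profile name, not part of the test harness):

    out/minikube-linux-amd64 -p addons-619347 ssh -- docker pull docker.io/nginx:alpine

While the rate limit is in effect, this should reproduce the same "toomanyrequests" error the kubelet logged.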
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
helpers_test.go:269: (dbg) Run:  kubectl --context addons-619347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz: exit status 1 (81.070781ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-619347/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:35:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq742 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jq742:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m15s                  default-scheduler  Successfully assigned default/nginx to addons-619347
	  Normal   Pulling    3m32s (x5 over 6m14s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m32s (x5 over 6m14s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m32s (x5 over 6m14s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    72s (x21 over 6m14s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     72s (x21 over 6m14s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-619347/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:35:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xgkr7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xgkr7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-619347
	  Warning  Failed     6m1s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m4s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m4s (x5 over 6m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m4s (x4 over 5m46s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    59s (x21 over 6m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     59s (x21 over 6m1s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfq8j (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sfq8j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dbtd8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-65dgz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz: exit status 1
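
The ErrImagePull/ImagePullBackOff events above come from Docker Hub's unauthenticated pull rate limit, not from a cluster fault. A minimal sketch of one workaround on this profile, assuming a Docker Hub account and a hypothetical secret named regcred (not part of the test flow):

kubectl --context addons-619347 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<access-token> -n default
kubectl --context addons-619347 patch serviceaccount default -n default \
  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Pods created under the default service account afterwards pull as an authenticated user and get the higher per-account limit.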
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-619347 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.479815549s)
--- FAIL: TestAddons/parallel/CSI (385.63s)

                                                
                                    
TestAddons/parallel/LocalPath (344.74s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-619347 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-619347 apply -f testdata/storage-provisioner-rancher/pod.yaml
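
For context, a minimal sketch of what the two applied manifests plausibly contain, reconstructed from the test-local-path pod description earlier in this report; the storage class name and requested size are assumptions, not read from the repo:

kubectl --context addons-619347 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed: class installed by storage-provisioner-rancher
  resources:
    requests:
      storage: 64Mi              # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  containers:
  - name: busybox
    image: busybox:stable
    command: ["sh", "-c", "echo 'local-path-provisioner' > /test/file1"]
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF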
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-619347 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: [the Run line above repeated 300 more times while polling the PVC phase, until the 5m0s wait expired; identical lines elided]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-619347 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (2.175µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
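
When a local-path claim sits in Pending like this, the claim's events and the provisioner's logs are the usual first checks. A sketch, assuming the addon follows the upstream layout (a local-path-provisioner deployment in the local-path-storage namespace; both names are assumptions here):

kubectl --context addons-619347 describe pvc test-pvc -n default
kubectl --context addons-619347 get events -n default --field-selector involvedObject.name=test-pvc
kubectl --context addons-619347 -n local-path-storage logs deploy/local-path-provisioner

If the class uses volumeBindingMode: WaitForFirstConsumer (the upstream default), the claim stays Pending until a consuming pod is scheduled, which is consistent with the Node: <none> shown for test-local-path above.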
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-619347
helpers_test.go:243: (dbg) docker inspect addons-619347:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	        "Created": "2025-09-26T22:29:24.504112175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1401920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:29:24.53667075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/hosts",
	        "LogPath": "/var/lib/docker/containers/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d/f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d-json.log",
	        "Name": "/addons-619347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-619347:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-619347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0caa77a58786e07b8d9d7cafc46cf1520daa88580e060469aff98344652db5d",
	                "LowerDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddf944e5ac1417ec2793c38ad4e9fe13c6283fa59ddce6003ae7a715d34daeba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-619347",
	                "Source": "/var/lib/docker/volumes/addons-619347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-619347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-619347",
	                "name.minikube.sigs.k8s.io": "addons-619347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3015286d67af8b7391959f3121ca363feb45d14fa55ccdc7193de806e7fe6e96",
	            "SandboxKey": "/var/run/docker/netns/3015286d67af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-619347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:cd:cb:d7:a7:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22f06ef7f1b3d4919d623039fdb7eaef892f9c8c0a7074ff47e8c48934f6f117",
	                    "EndpointID": "4b693477b2120ec160d127bc2bc90fabb016ebf45c34df1cad9bd2399ffdc1cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-619347",
	                        "f0caa77a5878"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
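
The full inspect dump is rarely needed; single fields can be extracted with docker inspect's Go-template flag (standard docker CLI; the network key below matches the profile name in the dump):

docker inspect -f '{{.State.Status}}' addons-619347
docker inspect -f '{{(index .NetworkSettings.Networks "addons-619347").IPAddress}}' addons-619347

For the container above, the second command prints 192.168.49.2.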
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-619347 -n addons-619347
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 logs -n 25
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-036757                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-036757   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-040048                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-040048   │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ start   │ --download-only -p download-docker-193843 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ -p download-docker-193843                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-193843 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ --download-only -p binary-mirror-237584 --alsologtostderr --binary-mirror http://127.0.0.1:35911 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-237584                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-237584   │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ disable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-619347 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-619347 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:34 UTC │ 26 Sep 25 22:34 UTC │
	│ addons  │ addons-619347 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ enable headlamp -p addons-619347 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ ip      │ addons-619347 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-619347                                                                                                                                                                                                                                                                                                                                                                                             │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	│ addons  │ addons-619347 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-619347          │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:01
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:01.756585 1401287 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:01.756707 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756717 1401287 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:01.756724 1401287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.756944 1401287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:29:01.757503 1401287 out.go:368] Setting JSON to false
	I0926 22:29:01.758423 1401287 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15086,"bootTime":1758910656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:01.758529 1401287 start.go:140] virtualization: kvm guest
	I0926 22:29:01.760350 1401287 out.go:179] * [addons-619347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:01.761510 1401287 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:01.761513 1401287 notify.go:220] Checking for updates...
	I0926 22:29:01.763728 1401287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:01.765716 1401287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:01.766946 1401287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:29:01.767993 1401287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:01.768984 1401287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:01.770171 1401287 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:01.792688 1401287 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:01.792779 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.845164 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.835526355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.845273 1401287 docker.go:318] overlay module found
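	
	The `docker system info --format "{{json .}}"` probe above is how minikube inspects the host daemon before choosing a driver; info.go then dumps the decoded struct. A minimal standalone sketch of the same probe follows. It decodes only a hand-picked subset of fields that are visible in the dump above (NCPU, MemTotal, ServerVersion, CgroupDriver, the storage Driver); the struct is illustrative, not minikube's actual type.
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// dockerInfo is a hand-picked subset of the JSON emitted by
	// `docker system info --format '{{json .}}'`.
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		ServerVersion string `json:"ServerVersion"`
		CgroupDriver  string `json:"CgroupDriver"`
		Driver        string `json:"Driver"` // storage driver, e.g. overlay2
	}
	
	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s, storage %s\n",
			info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver, info.Driver)
	}
	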
	I0926 22:29:01.847734 1401287 out.go:179] * Using the docker driver based on user configuration
	I0926 22:29:01.848892 1401287 start.go:304] selected driver: docker
	I0926 22:29:01.848910 1401287 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:01.848922 1401287 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:01.849577 1401287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:01.899952 1401287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-26 22:29:01.890671576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:01.900135 1401287 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:01.900371 1401287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:01.902046 1401287 out.go:179] * Using Docker driver with root privileges
	I0926 22:29:01.903097 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:01.903175 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:01.903186 1401287 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:01.903270 1401287 start.go:348] cluster config:
	{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:01.904858 1401287 out.go:179] * Starting "addons-619347" primary control-plane node in "addons-619347" cluster
	I0926 22:29:01.906044 1401287 cache.go:123] Beginning downloading kic base image for docker with docker
	I0926 22:29:01.907356 1401287 out.go:179] * Pulling base image v0.0.48 ...
	I0926 22:29:01.908297 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:01.908335 1401287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0926 22:29:01.908345 1401287 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:01.908416 1401287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:29:01.908443 1401287 preload.go:172] Found /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:01.908453 1401287 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0926 22:29:01.908843 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:01.908883 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json: {Name:mkc2865f84bd589b8eae2eb83eded5267684d61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:01.925224 1401287 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:29:01.925402 1401287 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:29:01.925420 1401287 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0926 22:29:01.925428 1401287 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0926 22:29:01.925435 1401287 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0926 22:29:01.925439 1401287 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0926 22:29:14.155592 1401287 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0926 22:29:14.155633 1401287 cache.go:232] Successfully downloaded all kic artifacts
	I0926 22:29:14.155712 1401287 start.go:360] acquireMachinesLock for addons-619347: {Name:mk16a13d35eefb90d37e67ab9d542372a6292c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:14.155829 1401287 start.go:364] duration metric: took 91.725µs to acquireMachinesLock for "addons-619347"
	I0926 22:29:14.155856 1401287 start.go:93] Provisioning new machine with config: &{Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:14.155980 1401287 start.go:125] createHost starting for "" (driver="docker")
	I0926 22:29:14.157562 1401287 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0926 22:29:14.157823 1401287 start.go:159] libmachine.API.Create for "addons-619347" (driver="docker")
	I0926 22:29:14.157858 1401287 client.go:168] LocalClient.Create starting
	I0926 22:29:14.158021 1401287 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem
	I0926 22:29:14.205932 1401287 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem
	I0926 22:29:14.366294 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 22:29:14.383620 1401287 cli_runner.go:211] docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 22:29:14.383691 1401287 network_create.go:284] running [docker network inspect addons-619347] to gather additional debugging logs...
	I0926 22:29:14.383716 1401287 cli_runner.go:164] Run: docker network inspect addons-619347
	W0926 22:29:14.399817 1401287 cli_runner.go:211] docker network inspect addons-619347 returned with exit code 1
	I0926 22:29:14.399876 1401287 network_create.go:287] error running [docker network inspect addons-619347]: docker network inspect addons-619347: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-619347 not found
	I0926 22:29:14.399898 1401287 network_create.go:289] output of [docker network inspect addons-619347]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-619347 not found
	
	** /stderr **
	I0926 22:29:14.400043 1401287 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:14.417291 1401287 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ae9be0}
	I0926 22:29:14.417339 1401287 network_create.go:124] attempt to create docker network addons-619347 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0926 22:29:14.417382 1401287 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-619347 addons-619347
	I0926 22:29:14.473127 1401287 network_create.go:108] docker network addons-619347 192.168.49.0/24 created
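	
	network_create.go above first finds a free private /24 (192.168.49.0/24 here) and then shells out to `docker network create` with the flags shown. The same invocation, sketched as a standalone Go program with the values hard-coded from this run:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Flags copied verbatim from the cli_runner invocation above.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=addons-619347",
			"addons-619347").CombinedOutput()
		fmt.Printf("%s", out) // prints the new network ID on success
		if err != nil {
			panic(err) // e.g. the subnet or network name already being in use
		}
	}
	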
	I0926 22:29:14.473163 1401287 kic.go:121] calculated static IP "192.168.49.2" for the "addons-619347" container
	I0926 22:29:14.473252 1401287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 22:29:14.489293 1401287 cli_runner.go:164] Run: docker volume create addons-619347 --label name.minikube.sigs.k8s.io=addons-619347 --label created_by.minikube.sigs.k8s.io=true
	I0926 22:29:14.506092 1401287 oci.go:103] Successfully created a docker volume addons-619347
	I0926 22:29:14.506161 1401287 cli_runner.go:164] Run: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 22:29:20.841341 1401287 cli_runner.go:217] Completed: docker run --rm --name addons-619347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --entrypoint /usr/bin/test -v addons-619347:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.335139978s)
	I0926 22:29:20.841369 1401287 oci.go:107] Successfully prepared a docker volume addons-619347
	I0926 22:29:20.841406 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:20.841430 1401287 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 22:29:20.841514 1401287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 22:29:24.436467 1401287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-619347:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.594814262s)
	I0926 22:29:24.436527 1401287 kic.go:203] duration metric: took 3.595091279s to extract preloaded images to volume ...
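	
	The extraction step above mounts the preload tarball read-only into a throwaway kicbase container and untars it into the `addons-619347` volume, so the node container later starts with /var already populated with images. A standalone sketch of the same `docker run`, with the image digest and tarball path hard-coded from this run:
	
	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		const (
			kicImage = "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1"
			preload  = "/home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4"
		)
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", "addons-619347:/extractDir",
			kicImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
	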
	W0926 22:29:24.436629 1401287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 22:29:24.436675 1401287 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 22:29:24.436720 1401287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 22:29:24.488860 1401287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-619347 --name addons-619347 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-619347 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-619347 --network addons-619347 --ip 192.168.49.2 --volume addons-619347:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 22:29:24.739034 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Running}}
	I0926 22:29:24.756901 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:24.774535 1401287 cli_runner.go:164] Run: docker exec addons-619347 stat /var/lib/dpkg/alternatives/iptables
	I0926 22:29:24.821732 1401287 oci.go:144] the created container "addons-619347" has a running status.
	I0926 22:29:24.821762 1401287 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa...
	I0926 22:29:25.058873 1401287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 22:29:25.084720 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.103222 1401287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 22:29:25.103256 1401287 kic_runner.go:114] Args: [docker exec --privileged addons-619347 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 22:29:25.152057 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:25.171032 1401287 machine.go:93] provisionDockerMachine start ...
	I0926 22:29:25.171165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.192356 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.192770 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.192789 1401287 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:29:25.329327 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.329360 1401287 ubuntu.go:182] provisioning hostname "addons-619347"
	I0926 22:29:25.329440 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.347623 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.347852 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.347866 1401287 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-619347 && echo "addons-619347" | sudo tee /etc/hostname
	I0926 22:29:25.495671 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-619347
	
	I0926 22:29:25.495764 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:25.513361 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.513676 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:25.513706 1401287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-619347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-619347/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-619347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.648127 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
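	
	Every provisioning command above (hostname, the /etc/hosts patch script) runs over SSH to the container's forwarded host port, 127.0.0.1:33881 in this run, authenticated with the generated id_rsa key. One way to reproduce such a round-trip, sketched with golang.org/x/crypto/ssh rather than minikube's own native client, using the host, port, user, and key path from this log:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		keyPath := "/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33881", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // "addons-619347"
	}
	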
	I0926 22:29:25.648158 1401287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-1396392/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-1396392/.minikube}
	I0926 22:29:25.648181 1401287 ubuntu.go:190] setting up certificates
	I0926 22:29:25.648194 1401287 provision.go:84] configureAuth start
	I0926 22:29:25.648256 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:25.665581 1401287 provision.go:143] copyHostCerts
	I0926 22:29:25.665655 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.665964 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.666216 1401287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-1396392/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.666332 1401287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem org=jenkins.addons-619347 san=[127.0.0.1 192.168.49.2 addons-619347 localhost minikube]
	I0926 22:29:26.345521 1401287 provision.go:177] copyRemoteCerts
	I0926 22:29:26.345589 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:26.345626 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.363376 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:26.461182 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:26.487057 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:26.511222 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 22:29:26.535844 1401287 provision.go:87] duration metric: took 887.635192ms to configureAuth
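	
	provision.go:117 above generates a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, 192.168.49.2, addons-619347, localhost, minikube). A minimal crypto/x509 sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-619347"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the san=[...] list in the log line above.
			DNSNames:    []string{"addons-619347", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		// Self-signed (template doubles as parent) to keep the sketch short.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	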
	I0926 22:29:26.535878 1401287 ubuntu.go:206] setting minikube options for container-runtime
	I0926 22:29:26.536095 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:26.536165 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.554135 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.554419 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.554438 1401287 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 22:29:26.690395 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0926 22:29:26.690420 1401287 ubuntu.go:71] root file system type: overlay
	I0926 22:29:26.690565 1401287 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 22:29:26.690630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.708389 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.708653 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.708753 1401287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 22:29:26.857459 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 22:29:26.857566 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:26.875261 1401287 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.875543 1401287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I0926 22:29:26.875567 1401287 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 22:29:27.972927 1401287 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-26 22:29:26.855075288 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0926 22:29:27.972953 1401287 machine.go:96] duration metric: took 2.801887579s to provisionDockerMachine
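	
	The shell one-liner above (`sudo diff -u ... || { sudo mv ...; systemctl daemon-reload && ... restart docker; }`) only swaps in docker.service.new and bounces the daemon when the rendered unit actually differs, which keeps re-provisioning idempotent. The same change-detect-then-swap pattern, sketched locally in Go (paths and unit name taken from this run; would need root, and note it does reload and restart docker whenever the content changes):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	// updateUnit writes newBody into path and reloads/restarts docker only
	// when the content actually changed.
	func updateUnit(path string, newBody []byte) error {
		old, _ := os.ReadFile(path) // a missing unit simply reads as empty
		if bytes.Equal(old, newBody) {
			return nil // unchanged: skip daemon-reload and restart entirely
		}
		if err := os.WriteFile(path+".new", newBody, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}
	
	func main() {
		// Stand-in unit body; the real provisioning renders the full
		// [Unit]/[Service]/[Install] file shown earlier in this log.
		body := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("/lib/systemd/system/docker.service", body); err != nil {
			panic(err)
		}
	}
	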
	I0926 22:29:27.972966 1401287 client.go:171] duration metric: took 13.815098068s to LocalClient.Create
	I0926 22:29:27.972989 1401287 start.go:167] duration metric: took 13.815166582s to libmachine.API.Create "addons-619347"
	I0926 22:29:27.972999 1401287 start.go:293] postStartSetup for "addons-619347" (driver="docker")
	I0926 22:29:27.973014 1401287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:27.973075 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:27.973123 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:27.990436 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.088898 1401287 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:28.092357 1401287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 22:29:28.092381 1401287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 22:29:28.092389 1401287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 22:29:28.092397 1401287 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 22:29:28.092411 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/addons for local assets ...
	I0926 22:29:28.092496 1401287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-1396392/.minikube/files for local assets ...
	I0926 22:29:28.092533 1401287 start.go:296] duration metric: took 119.526658ms for postStartSetup
	I0926 22:29:28.092888 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.110347 1401287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/config.json ...
	I0926 22:29:28.110666 1401287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:29:28.110720 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.127963 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.219507 1401287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 22:29:28.223820 1401287 start.go:128] duration metric: took 14.067824148s to createHost
	I0926 22:29:28.223850 1401287 start.go:83] releasing machines lock for "addons-619347", held for 14.068007272s
	I0926 22:29:28.223922 1401287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-619347
	I0926 22:29:28.240598 1401287 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:28.240633 1401287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:28.240652 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.240703 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:28.257372 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.258797 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:28.423810 1401287 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:28.428533 1401287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 22:29:28.433038 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0926 22:29:28.461936 1401287 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0926 22:29:28.462028 1401287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:28.488392 1401287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0926 22:29:28.488420 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.488455 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.488593 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.505081 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0926 22:29:28.516249 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 22:29:28.526291 1401287 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0926 22:29:28.526353 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0926 22:29:28.536220 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.546282 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 22:29:28.556108 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 22:29:28.565920 1401287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:28.575000 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 22:29:28.584684 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 22:29:28.594441 1401287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
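
Taken together, the sed edits from 22:29:28.505 through 22:29:28.594 steer /etc/containerd/config.toml toward a fragment like the one below. This is a reconstruction from the substitutions above, not the file's actual contents; the section nesting is an assumption based on containerd's usual CRI-plugin layout:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
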
	I0926 22:29:28.604436 1401287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:28.612926 1401287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:29:28.621307 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:28.686706 1401287 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 22:29:28.765771 1401287 start.go:495] detecting cgroup driver to use...
	I0926 22:29:28.765825 1401287 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:28.765881 1401287 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 22:29:28.778235 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.789193 1401287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:28.806369 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:28.817718 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 22:29:28.828841 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:28.845391 1401287 ssh_runner.go:195] Run: which cri-dockerd
	I0926 22:29:28.848841 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 22:29:28.859051 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0926 22:29:28.876661 1401287 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 22:29:28.939711 1401287 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 22:29:29.006868 1401287 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0926 22:29:29.007006 1401287 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
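
The 129-byte daemon.json pushed here is what switches Docker to the "systemd" cgroup driver detected at 22:29:28.765. The log does not echo the payload; a plausible shape, assuming minikube's usual exec-opts approach (every field below is an assumption, not a quote from this run):

    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
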
	I0926 22:29:29.025882 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0926 22:29:29.037344 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:29.102031 1401287 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 22:29:29.866941 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:29.878676 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 22:29:29.890349 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:29.901859 1401287 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 22:29:29.971712 1401287 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 22:29:30.041653 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.108440 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 22:29:30.127589 1401287 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0926 22:29:30.138450 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.204543 1401287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 22:29:30.280240 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 22:29:30.292074 1401287 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 22:29:30.292147 1401287 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 22:29:30.295851 1401287 start.go:563] Will wait 60s for crictl version
	I0926 22:29:30.295920 1401287 ssh_runner.go:195] Run: which crictl
	I0926 22:29:30.299332 1401287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:30.334344 1401287 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0926 22:29:30.334407 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.359394 1401287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 22:29:30.385840 1401287 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0926 22:29:30.385911 1401287 cli_runner.go:164] Run: docker network inspect addons-619347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:30.402657 1401287 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:30.406689 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:30.418124 1401287 kubeadm.go:883] updating cluster {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:30.418244 1401287 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0926 22:29:30.418289 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.437981 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.438007 1401287 docker.go:621] Images already preloaded, skipping extraction
	I0926 22:29:30.438061 1401287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 22:29:30.457379 1401287 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 22:29:30.457402 1401287 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:30.457415 1401287 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0926 22:29:30.457550 1401287 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-619347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 22:29:30.457608 1401287 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 22:29:30.507568 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:30.507618 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:30.507640 1401287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:30.507666 1401287 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-619347 NodeName:addons-619347 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:30.507817 1401287 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-619347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 22:29:30.507878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:30.517618 1401287 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:30.517680 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:30.526766 1401287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:30.544641 1401287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:30.561976 1401287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
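
The config rendered at 22:29:30.507 is shipped to the node here as /var/tmp/minikube/kubeadm.yaml.new. To sanity-check such a file by hand, recent kubeadm releases include a validator subcommand; for example (assumed available in the pinned binaries directory):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
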
	I0926 22:29:30.579430 1401287 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:30.582806 1401287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
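
Between this edit and the matching one at 22:29:30.406, the node's /etc/hosts ends up carrying two minikube-managed mappings. Only these two lines are implied by the log; anything else in the file is whatever the base image ships:

    192.168.49.1	host.minikube.internal
    192.168.49.2	control-plane.minikube.internal
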
	I0926 22:29:30.593536 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:30.659215 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:30.680701 1401287 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347 for IP: 192.168.49.2
	I0926 22:29:30.680722 1401287 certs.go:195] generating shared ca certs ...
	I0926 22:29:30.680743 1401287 certs.go:227] acquiring lock for ca certs: {Name:mk6c7838cc2dce82903d545772166c35f6a8ea14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.680859 1401287 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key
	I0926 22:29:30.837572 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt ...
	I0926 22:29:30.837605 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt: {Name:mka8a7fba6c323e3efb5c337a110d874f4a069f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837797 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key ...
	I0926 22:29:30.837813 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key: {Name:mk5241bded4d58e8d730b5c39e3cb6b761b06b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:30.837926 1401287 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key
	I0926 22:29:31.379026 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt ...
	I0926 22:29:31.379062 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt: {Name:mk0b26827e7effdc6e0cb418dab9aa237c23935e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379267 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key ...
	I0926 22:29:31.379283 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key: {Name:mkc17ee61ac662bf18733fd6087e23ac2b546ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.379447 1401287 certs.go:257] generating profile certs ...
	I0926 22:29:31.379550 1401287 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key
	I0926 22:29:31.379571 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt with IP's: []
	I0926 22:29:31.863291 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt ...
	I0926 22:29:31.863331 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: {Name:mk25ddefd62aaf8d3e2f6d1fd2d519d1c2b1bea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863552 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key ...
	I0926 22:29:31.863571 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.key: {Name:mk8cc05aa8f2753617dfe3d2ae365690c5c6ce86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.863711 1401287 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15
	I0926 22:29:31.863742 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0926 22:29:32.476987 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 ...
	I0926 22:29:32.477026 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15: {Name:mkd972c04e4a2418d910fa6a476af654883d90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477231 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 ...
	I0926 22:29:32.477251 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15: {Name:mk6e7ebd8b361ff43396ae1d43e26cc4b3fca9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.477363 1401287 certs.go:382] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt
	I0926 22:29:32.477503 1401287 certs.go:386] copying /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key.7bdc1e15 -> /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key
	I0926 22:29:32.477596 1401287 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key
	I0926 22:29:32.477626 1401287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt with IP's: []
	I0926 22:29:32.537971 1401287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt ...
	I0926 22:29:32.538009 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt: {Name:mkfbd9d4d456b434b04760e6c3778ba177b5caa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538198 1401287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key ...
	I0926 22:29:32.538217 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key: {Name:mkdbd77fea74f3adf740a694b7d5ff5142acf56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.538432 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 22:29:32.538493 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:32.538542 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:32.538584 1401287 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-1396392/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:32.539249 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:32.564650 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 22:29:32.589199 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:32.612819 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:32.636809 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:32.660922 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:32.684674 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:32.708845 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:29:32.732866 1401287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:32.759367 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:32.777459 1401287 ssh_runner.go:195] Run: openssl version
	I0926 22:29:32.783004 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:32.794673 1401287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798422 1401287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.798497 1401287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:32.805099 1401287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
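
The b5213941.0 link name follows OpenSSL's subject-hash convention: CA directories resolve certificates via the hash printed by the `openssl x509 -hash` run at 22:29:32.798. Re-running it by hand would reproduce the link name (illustrative; the output is inferred from the symlink created above):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
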
	I0926 22:29:32.814605 1401287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:32.817944 1401287 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:32.818016 1401287 kubeadm.go:400] StartCluster: {Name:addons-619347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-619347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:32.818116 1401287 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 22:29:32.836878 1401287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:32.846020 1401287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:32.855171 1401287 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 22:29:32.855233 1401287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:32.863903 1401287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:32.863919 1401287 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:32.863955 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:32.872442 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:32.872518 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:32.880882 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:32.889348 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:32.889394 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:32.897735 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.906508 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:32.906558 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:32.915447 1401287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:32.924534 1401287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:32.924590 1401287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:32.933327 1401287 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 22:29:32.971243 1401287 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:32.971298 1401287 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:33.008888 1401287 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 22:29:33.009014 1401287 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 22:29:33.009067 1401287 kubeadm.go:318] OS: Linux
	I0926 22:29:33.009160 1401287 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 22:29:33.009217 1401287 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 22:29:33.009313 1401287 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 22:29:33.009388 1401287 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 22:29:33.009472 1401287 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 22:29:33.009577 1401287 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 22:29:33.009649 1401287 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 22:29:33.009739 1401287 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 22:29:33.064493 1401287 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:33.064612 1401287 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:33.064736 1401287 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:33.076202 1401287 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:33.078537 1401287 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:33.078633 1401287 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:33.078712 1401287 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:33.613982 1401287 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:34.132193 1401287 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:34.241294 1401287 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:34.638661 1401287 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:34.928444 1401287 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:34.928596 1401287 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.122701 1401287 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:35.122888 1401287 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-619347 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:29:35.275604 1401287 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:35.549799 1401287 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:35.689108 1401287 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:35.689184 1401287 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:35.894121 1401287 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:36.122749 1401287 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:36.401681 1401287 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:36.449466 1401287 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:36.577737 1401287 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:36.578213 1401287 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:36.581892 1401287 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:36.583521 1401287 out.go:252]   - Booting up control plane ...
	I0926 22:29:36.583635 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:36.583735 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:36.584452 1401287 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:36.594025 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:36.594112 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:36.599591 1401287 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:36.599832 1401287 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:36.599913 1401287 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:36.682320 1401287 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:36.682523 1401287 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:37.683335 1401287 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001189529s
	I0926 22:29:37.687852 1401287 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:37.687994 1401287 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0926 22:29:37.688138 1401287 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:37.688267 1401287 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:38.693325 1401287 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005328653s
	I0926 22:29:39.818196 1401287 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.130304657s
	I0926 22:29:41.690178 1401287 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002189462s
	I0926 22:29:41.702527 1401287 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:41.711408 1401287 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:41.720193 1401287 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:41.720435 1401287 kubeadm.go:318] [mark-control-plane] Marking the node addons-619347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:41.727838 1401287 kubeadm.go:318] [bootstrap-token] Using token: ydwgpt.re3mhs2qr7yfu0od
	I0926 22:29:41.729412 1401287 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:41.729554 1401287 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:41.732328 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:41.737352 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:41.740726 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:41.743207 1401287 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:41.745363 1401287 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:42.096302 1401287 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:42.513166 1401287 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:43.094717 1401287 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:43.095522 1401287 kubeadm.go:318] 
	I0926 22:29:43.095627 1401287 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:43.095642 1401287 kubeadm.go:318] 
	I0926 22:29:43.095755 1401287 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:43.095774 1401287 kubeadm.go:318] 
	I0926 22:29:43.095814 1401287 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:43.095897 1401287 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:43.095977 1401287 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:43.095986 1401287 kubeadm.go:318] 
	I0926 22:29:43.096062 1401287 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:43.096071 1401287 kubeadm.go:318] 
	I0926 22:29:43.096135 1401287 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:43.096145 1401287 kubeadm.go:318] 
	I0926 22:29:43.096220 1401287 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:43.096324 1401287 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:43.096430 1401287 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:43.096455 1401287 kubeadm.go:318] 
	I0926 22:29:43.096638 1401287 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:43.096786 1401287 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:43.096798 1401287 kubeadm.go:318] 
	I0926 22:29:43.096919 1401287 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097088 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 \
	I0926 22:29:43.097115 1401287 kubeadm.go:318] 	--control-plane 
	I0926 22:29:43.097122 1401287 kubeadm.go:318] 
	I0926 22:29:43.097214 1401287 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:43.097228 1401287 kubeadm.go:318] 
	I0926 22:29:43.097348 1401287 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ydwgpt.re3mhs2qr7yfu0od \
	I0926 22:29:43.097470 1401287 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:bb03dd3d3cc4e0d1ed19743dc0135bcd735f974baaac927fcaff77cb8a636413 
	I0926 22:29:43.099587 1401287 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 22:29:43.099739 1401287 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:29:43.099768 1401287 cni.go:84] Creating CNI manager for ""
	I0926 22:29:43.099788 1401287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:29:43.101355 1401287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:43.102553 1401287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:43.112120 1401287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
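
The 496-byte /etc/cni/net.d/1-k8s.conflist written here wires the bridge CNI to the pod CIDR chosen at 22:29:30.507 (10.244.0.0/16). The payload is not echoed in the log; a sketch of a typical bridge-plus-portmap chain, where everything except the subnet is an assumption:

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
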
	I0926 22:29:43.130674 1401287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:43.130768 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.130767 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-619347 minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-619347 minikube.k8s.io/primary=true
	I0926 22:29:43.138720 1401287 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:43.217942 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:43.718375 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.218391 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:44.718337 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.219035 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:45.719000 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.218689 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:46.718531 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.218333 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.718316 1401287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.783783 1401287 kubeadm.go:1113] duration metric: took 4.653074895s to wait for elevateKubeSystemPrivileges
	I0926 22:29:47.783815 1401287 kubeadm.go:402] duration metric: took 14.965805729s to StartCluster
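
The burst of `kubectl get sa default` runs above (one roughly every 500ms from 22:29:43.2 to 22:29:47.7) is minikube confirming that the default ServiceAccount exists, i.e. the 4.653s "elevateKubeSystemPrivileges" wait reported here, following the cluster-admin binding created at 22:29:43.130. A hedged shell equivalent of that wait loop:

    # poll until the controller-manager has created the default ServiceAccount
    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
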
	I0926 22:29:47.783835 1401287 settings.go:142] acquiring lock: {Name:mk19bb20e8e2719c9f4ae7071ba1f293bea0c47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.783943 1401287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:29:47.784300 1401287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-1396392/kubeconfig: {Name:mk53eccd4814679d9dd1f60d4b668d1b7f9967e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:47.784499 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:47.784532 1401287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 22:29:47.784609 1401287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 22:29:47.784681 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784735 1401287 addons.go:69] Setting registry=true in profile "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784755 1401287 addons.go:69] Setting storage-provisioner=true in profile "addons-619347"
	I0926 22:29:47.784760 1401287 addons.go:238] Setting addon registry=true in "addons-619347"
	I0926 22:29:47.784746 1401287 addons.go:69] Setting registry-creds=true in profile "addons-619347"
	I0926 22:29:47.784770 1401287 addons.go:238] Setting addon storage-provisioner=true in "addons-619347"
	I0926 22:29:47.784775 1401287 addons.go:238] Setting addon registry-creds=true in "addons-619347"
	I0926 22:29:47.784785 1401287 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-619347"
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784811 1401287 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-619347"
	I0926 22:29:47.784804 1401287 addons.go:69] Setting inspektor-gadget=true in profile "addons-619347"
	I0926 22:29:47.784822 1401287 addons.go:69] Setting volumesnapshots=true in profile "addons-619347"
	I0926 22:29:47.784827 1401287 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-619347"
	I0926 22:29:47.784832 1401287 addons.go:238] Setting addon inspektor-gadget=true in "addons-619347"
	I0926 22:29:47.784833 1401287 addons.go:238] Setting addon volumesnapshots=true in "addons-619347"
	I0926 22:29:47.784844 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784849 1401287 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:47.784851 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784856 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784879 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784806 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784951 1401287 addons.go:69] Setting ingress-dns=true in profile "addons-619347"
	I0926 22:29:47.784970 1401287 addons.go:69] Setting default-storageclass=true in profile "addons-619347"
	I0926 22:29:47.784958 1401287 addons.go:69] Setting gcp-auth=true in profile "addons-619347"
	I0926 22:29:47.784988 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-619347"
	I0926 22:29:47.784817 1401287 addons.go:69] Setting volcano=true in profile "addons-619347"
	I0926 22:29:47.785003 1401287 addons.go:238] Setting addon volcano=true in "addons-619347"
	I0926 22:29:47.785032 1401287 addons.go:69] Setting cloud-spanner=true in profile "addons-619347"
	I0926 22:29:47.785040 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785045 1401287 addons.go:238] Setting addon cloud-spanner=true in "addons-619347"
	I0926 22:29:47.785065 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785262 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784800 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785379 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784973 1401287 addons.go:238] Setting addon ingress-dns=true in "addons-619347"
	I0926 22:29:47.785498 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785518 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785535 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785723 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785798 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785980 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.785350 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784992 1401287 mustload.go:65] Loading cluster: addons-619347
	I0926 22:29:47.784762 1401287 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-619347"
	I0926 22:29:47.787331 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.784792 1401287 addons.go:69] Setting metrics-server=true in profile "addons-619347"
	I0926 22:29:47.784734 1401287 addons.go:69] Setting yakd=true in profile "addons-619347"
	I0926 22:29:47.787078 1401287 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:47.785351 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787824 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.788010 1401287 addons.go:238] Setting addon metrics-server=true in "addons-619347"
	I0926 22:29:47.788028 1401287 addons.go:238] Setting addon yakd=true in "addons-619347"
	I0926 22:29:47.788047 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.788063 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.789412 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.787118 1401287 config.go:182] Loaded profile config "addons-619347": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:29:47.784734 1401287 addons.go:69] Setting ingress=true in profile "addons-619347"
	I0926 22:29:47.789936 1401287 addons.go:238] Setting addon ingress=true in "addons-619347"
	I0926 22:29:47.789980 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.784814 1401287 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-619347"
	I0926 22:29:47.790231 1401287 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-619347"
	I0926 22:29:47.790451 1401287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:47.793232 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.793847 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.802421 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.803014 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.835418 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0926 22:29:47.836021 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:47.839393 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:47.839421 1401287 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:47.840142 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.845675 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0926 22:29:47.849257 1401287 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0926 22:29:47.856053 1401287 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:47.856545 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:47.858820 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
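
Note: the "scp memory -->" lines stream manifests that minikube embeds in its own binary; nothing is read from a file on the host. A minimal Go sketch of that pattern, assuming golang.org/x/crypto/ssh and the key path and port shown in the sshutil lines further down — this shows the general technique, not minikube's sshutil code:

	package assets

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushAsset writes an in-memory manifest to a path inside the node over SSH.
	func pushAsset(data []byte, dst string) error {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa")
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33881", cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data) // the "memory" side of "scp memory -->"
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}
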
	I0926 22:29:47.858894 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.860040 1401287 addons.go:238] Setting addon default-storageclass=true in "addons-619347"
	I0926 22:29:47.860081 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.860516 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:47.860534 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:47.860630 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.866839 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.873854 1401287 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:47.875341 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:47.875365 1401287 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:47.875428 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.882655 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:47.882749 1401287 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:47.884700 1401287 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:47.884703 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:47.885073 1401287 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:47.885418 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:47.885504 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.887232 1401287 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:47.887247 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:47.887315 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:47.887396 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.889515 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:47.892008 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:47.893405 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:47.895131 1401287 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:47.896348 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:47.896370 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:47.896434 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.897311 1401287 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-619347"
	I0926 22:29:47.897358 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.898142 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:47.899126 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:47.900143 1401287 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:47.902104 1401287 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:47.902740 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:47.902755 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:47.902813 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.903595 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:47.903615 1401287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:47.903685 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.911178 1401287 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:47.912616 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:47.912637 1401287 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:47.912867 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.916927 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:47.918186 1401287 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:47.919909 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.920091 1401287 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:47.920106 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:47.920166 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.921441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:47.922745 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:47.923875 1401287 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:47.923890 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:47.923943 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.926937 1401287 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:47.927973 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:47.927993 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:47.928052 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.940536 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.942062 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0926 22:29:47.945122 1401287 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:47.946248 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:47.946273 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:47.946337 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.951570 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.958865 1401287 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:47.959859 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.960450 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.961755 1401287 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:47.965573 1401287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:47.965594 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:47.965659 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.966411 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976561 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.976622 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977106 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.977107 1401287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:47.977119 1401287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:47.977177 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:47.980224 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.984609 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.989681 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:47.990796 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:47.997697 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.997795 1401287 retry.go:31] will retry after 178.321817ms: ssh: handshake failed: EOF
	W0926 22:29:47.999217 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:47.999256 1401287 retry.go:31] will retry after 245.552991ms: ssh: handshake failed: EOF
	I0926 22:29:48.009280 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.011073 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.018912 1401287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:48.019331 1401287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:29:48.022191 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:48.027290 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	W0926 22:29:48.029295 1401287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:29:48.029402 1401287 retry.go:31] will retry after 284.652213ms: ssh: handshake failed: EOF
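
Note: the three handshake failures above are benign; each dial is retried after a short, slightly randomized delay (retry.go:31). A sketch of that retry-with-jitter loop, assuming a doubling base delay (the exact policy is minikube-internal):

	package retry

	import (
		"log"
		"math/rand"
		"time"
	)

	// retryWithJitter calls fn until it succeeds or attempts are exhausted,
	// sleeping a jittered, growing delay between tries.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base + time.Duration(rand.Int63n(int64(base))) // jitter: base..2*base
			log.Printf("will retry after %v: %v", d, err)
			time.Sleep(d)
			base *= 2
		}
		return err
	}
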
	I0926 22:29:48.076445 1401287 node_ready.go:35] waiting up to 6m0s for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081001 1401287 node_ready.go:49] node "addons-619347" is "Ready"
	I0926 22:29:48.081030 1401287 node_ready.go:38] duration metric: took 4.536047ms for node "addons-619347" to be "Ready" ...
	I0926 22:29:48.081059 1401287 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:48.081111 1401287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:29:48.140834 1401287 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.140859 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:48.162194 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:48.165548 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:48.168900 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:48.182428 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:48.188630 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:48.188700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:48.201257 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:48.201282 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:48.206272 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:48.206297 1401287 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:48.207662 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:48.207682 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:48.218223 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:48.220995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:48.226298 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:48.226321 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:48.226742 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:48.226761 1401287 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:48.262874 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:48.262908 1401287 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:48.275319 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:48.275353 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:48.291538 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:48.291571 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:48.310099 1401287 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.310124 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:48.326030 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:48.326056 1401287 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:48.326064 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:48.326081 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:48.368923 1401287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.368970 1401287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:48.377708 1401287 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:48.377782 1401287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:48.395824 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:48.409558 1401287 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
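
Note: the sed pipeline launched at 22:29:48.019 is what produced the line above: it inserts a hosts stanza ahead of the forward plugin (and a log directive ahead of errors) in the coredns ConfigMap, so pods can resolve the container gateway as host.minikube.internal. Only the inserted lines are certain here; the rest of the Corefile is elided:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
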
	I0926 22:29:48.410568 1401287 api_server.go:72] duration metric: took 626.001878ms to wait for apiserver process to appear ...
	I0926 22:29:48.410598 1401287 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:29:48.410621 1401287 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0926 22:29:48.424990 1401287 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0926 22:29:48.427236 1401287 api_server.go:141] control plane version: v1.34.0
	I0926 22:29:48.427333 1401287 api_server.go:131] duration metric: took 16.7257ms to wait for apiserver health ...
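
Note: the healthz gate above is a plain HTTPS GET that must return 200 with body "ok". A self-contained sketch of the poll; certificate verification is skipped here for brevity, whereas minikube checks against the cluster CA:

	package health

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it answers 200 "ok" or the timeout expires,
	// e.g. waitHealthz("https://192.168.49.2:8443/healthz", time.Minute).
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no healthy apiserver at %s within %v", url, timeout)
	}
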
	I0926 22:29:48.427359 1401287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:29:48.434147 1401287 system_pods.go:59] 7 kube-system pods found
	I0926 22:29:48.434185 1401287 system_pods.go:61] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434195 1401287 system_pods.go:61] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.434206 1401287 system_pods.go:61] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.434221 1401287 system_pods.go:61] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.434230 1401287 system_pods.go:61] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.434237 1401287 system_pods.go:61] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.434245 1401287 system_pods.go:61] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.434254 1401287 system_pods.go:74] duration metric: took 6.877162ms to wait for pod list to return data ...
	I0926 22:29:48.434265 1401287 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:29:48.437910 1401287 default_sa.go:45] found service account: "default"
	I0926 22:29:48.437986 1401287 default_sa.go:55] duration metric: took 3.713655ms for default service account to be created ...
	I0926 22:29:48.438009 1401287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:29:48.449749 1401287 system_pods.go:86] 7 kube-system pods found
	I0926 22:29:48.449859 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449883 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.449933 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.449956 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 22:29:48.449992 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.450028 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.450047 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.450113 1401287 retry.go:31] will retry after 220.911414ms: missing components: kube-dns, kube-proxy
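
Note: this wait loop re-lists kube-system pods until kube-dns and kube-proxy report Running, using the same jittered retry as above. The state it polls can be reproduced by hand with:

	kubectl --context addons-619347 --namespace kube-system get pods -o wide
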
	I0926 22:29:48.454420 1401287 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.454446 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:48.467995 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:48.486003 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:48.486043 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:48.505966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:48.506005 1401287 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:48.519158 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 22:29:48.533016 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:48.564879 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:48.613388 1401287 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.613410 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:48.638555 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:48.678611 1401287 system_pods.go:86] 8 kube-system pods found
	I0926 22:29:48.678647 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending
	I0926 22:29:48.678660 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678669 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:48.678691 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:48.678698 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:48.678709 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:48.678717 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:48.678724 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:48.678749 1401287 retry.go:31] will retry after 325.08055ms: missing components: kube-dns, kube-proxy
	I0926 22:29:48.694878 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:48.694910 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:48.717411 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:48.874966 1401287 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:48.875006 1401287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:48.915620 1401287 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-619347" context rescaled to 1 replicas
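
Note: the rescale above trims coredns from the two replicas seen in the pod lists down to one for this single-node cluster; it is roughly equivalent to:

	kubectl --context addons-619347 --namespace kube-system scale deployment coredns --replicas=1
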
	I0926 22:29:48.947182 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:48.947278 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:49.013309 1401287 system_pods.go:86] 9 kube-system pods found
	I0926 22:29:49.013412 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.013424 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013461 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.013471 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.013525 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.013537 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.013546 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.013553 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.013560 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.013636 1401287 retry.go:31] will retry after 486.746944ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.102910 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:49.102950 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:49.259460 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:49.259504 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:49.377226 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:49.377250 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:49.493928 1401287 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:49.493968 1401287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:49.517924 1401287 system_pods.go:86] 14 kube-system pods found
	I0926 22:29:49.517990 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:49.518004 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518013 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:49.518022 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 22:29:49.518044 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:49.518055 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:49.518063 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:49.518072 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 22:29:49.518081 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:49.518100 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:49.518123 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:49.518143 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:49.518154 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:49.518165 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending
	I0926 22:29:49.518211 1401287 retry.go:31] will retry after 599.651697ms: missing components: kube-dns, kube-proxy
	I0926 22:29:49.625802 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:50.130675 1401287 system_pods.go:86] 15 kube-system pods found
	I0926 22:29:50.130828 1401287 system_pods.go:89] "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:29:50.130842 1401287 system_pods.go:89] "coredns-66bc5c9577-l8gdk" [274a2cdf-93eb-4503-807d-8f18887cfc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130854 1401287 system_pods.go:89] "coredns-66bc5c9577-qctdw" [79e3c602-4683-479a-a931-c6a4dbf6a202] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:29:50.130861 1401287 system_pods.go:89] "etcd-addons-619347" [fc9bdd1a-7f59-4f36-b45e-b947a144e6ad] Running
	I0926 22:29:50.130866 1401287 system_pods.go:89] "kube-apiserver-addons-619347" [42b981f5-1497-4776-928c-2e5d0dbaf309] Running
	I0926 22:29:50.130875 1401287 system_pods.go:89] "kube-controller-manager-addons-619347" [4a0482a6-c1fc-47d8-b777-66e61cbcce78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 22:29:50.130885 1401287 system_pods.go:89] "kube-ingress-dns-minikube" [67d5aed1-60ec-4253-955f-5b33c2d59118] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:29:50.130892 1401287 system_pods.go:89] "kube-proxy-sdscg" [3a54821c-0141-4976-ab37-84e6f1ab4883] Running
	I0926 22:29:50.130900 1401287 system_pods.go:89] "kube-scheduler-addons-619347" [819e02c0-b2b6-447a-983e-a768c2c004e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 22:29:50.130908 1401287 system_pods.go:89] "metrics-server-85b7d694d7-mjlqr" [18663e65-efc9-4e15-8dad-c4e23a7f7f18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:29:50.130924 1401287 system_pods.go:89] "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:29:50.130932 1401287 system_pods.go:89] "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:29:50.130942 1401287 system_pods.go:89] "registry-creds-764b6fb674-kjmd4" [70ab44b0-8ebe-4b65-831d-a4cc579401a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:29:50.130951 1401287 system_pods.go:89] "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:29:50.130958 1401287 system_pods.go:89] "storage-provisioner" [bd8557de-6ad0-4dd6-bcc3-184086181257] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:29:50.130969 1401287 system_pods.go:126] duration metric: took 1.692943423s to wait for k8s-apps to be running ...
	I0926 22:29:50.130981 1401287 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:29:50.131036 1401287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:29:50.228682 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.066443039s)
	I0926 22:29:50.228730 1401287 addons.go:479] Verifying addon ingress=true in "addons-619347"
	I0926 22:29:50.229183 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.06360117s)
	I0926 22:29:50.229277 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.06027927s)
	I0926 22:29:50.229386 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.046934043s)
	W0926 22:29:50.229417 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:50.229439 1401287 retry.go:31] will retry after 244.753675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
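
Note: the validation error is consistent with the transfer earlier in the log: ig-crd.yaml was pushed as only 14 bytes (scp line at 22:29:47.875), presumably a truncated or placeholder asset, far too small to hold a complete manifest, so kubectl rejects it for lacking the two header fields every Kubernetes object must declare, e.g.:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
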
	I0926 22:29:50.229506 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.040735105s)
	I0926 22:29:50.229590 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040703194s)
	I0926 22:29:50.229630 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.011384775s)
	I0926 22:29:50.229674 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00860092s)
	I0926 22:29:50.229967 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.834111385s)
	I0926 22:29:50.229990 1401287 addons.go:479] Verifying addon registry=true in "addons-619347"
	I0926 22:29:50.230454 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.762415616s)
	I0926 22:29:50.230518 1401287 out.go:179] * Verifying ingress addon...
	I0926 22:29:50.230635 1401287 addons.go:479] Verifying addon metrics-server=true in "addons-619347"
	I0926 22:29:50.233574 1401287 out.go:179] * Verifying registry addon...
	I0926 22:29:50.234496 1401287 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:29:50.236422 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:29:50.239932 1401287 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:29:50.239997 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.242126 1401287 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:29:50.242195 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:50.474912 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:50.747610 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:50.749841 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.178335 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.659134928s)
	I0926 22:29:51.178429 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.645380917s)
	I0926 22:29:51.178600 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.613538879s)
	I0926 22:29:51.178880 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.540232302s)
	I0926 22:29:51.179022 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461568485s)
	W0926 22:29:51.179054 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:29:51.179074 1401287 retry.go:31] will retry after 372.721698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
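
Note: this failure is an ordering race, not a bad manifest: the VolumeSnapshotClass is applied in the same invocation that creates its CRD, and the apiserver has not established the new type yet ("ensure CRDs are installed first"). minikube's fallback is to re-apply with --force (22:29:51.55); the race can also be avoided by waiting for the CRD explicitly:

	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
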
	I0926 22:29:51.180773 1401287 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-619347 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:29:51.223913 1401287 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.092854415s)
	I0926 22:29:51.223952 1401287 system_svc.go:56] duration metric: took 1.092967022s WaitForService to wait for kubelet
	I0926 22:29:51.223963 1401287 kubeadm.go:586] duration metric: took 3.439402099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:51.223986 1401287 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:29:51.224342 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.598487819s)
	I0926 22:29:51.224378 1401287 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-619347"
	I0926 22:29:51.225939 1401287 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:29:51.228192 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:29:51.229798 1401287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 22:29:51.229833 1401287 node_conditions.go:123] node cpu capacity is 8
	I0926 22:29:51.229856 1401287 node_conditions.go:105] duration metric: took 5.863751ms to run NodePressure ...
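The NodePressure check above reads the capacity the node advertises (304681132Ki of ephemeral storage, 8 CPUs). The same figures can be pulled directly from the API; a sketch using the cluster and node names from the log:

	# Sketch only: print the capacity fields node_conditions.go inspects.
	kubectl --context addons-619347 get node addons-619347 \
	        -o jsonpath='{.status.capacity}{"\n"}'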
	I0926 22:29:51.229880 1401287 start.go:241] waiting for startup goroutines ...
	I0926 22:29:51.234026 1401287 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:29:51.234047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.241936 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.243854 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.552700 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:51.709711 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.234742831s)
	W0926 22:29:51.709760 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:51.709786 1401287 retry.go:31] will retry after 268.370333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
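Unlike the snapshot-class race, this ig-crd.yaml failure is identical on every retry: the client-side validator rejects a YAML document in the file that carries no apiVersion/kind header (an empty or comment-only document after a stray "---" separator commonly produces exactly this message), so re-applying the unchanged file can never succeed. A hypothetical check on the node, using the path from the log:

	# Sketch only: list document separators and header fields so a
	# headerless or empty document in ig-crd.yaml stands out.
	sudo grep -n -e '^---' -e '^apiVersion:' -e '^kind:' \
	        /etc/kubernetes/addons/ig-crd.yaml

The error text suggests --validate=false as an escape hatch, but that would only skip the check, not repair the broken document.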
	I0926 22:29:51.732520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:51.738383 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:51.739361 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:51.978851 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:52.231665 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.237879 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.238844 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:52.731592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:52.738117 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:52.739055 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.232517 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.237333 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.239471 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:53.731711 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:53.737791 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:53.738851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.244329 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.691529274s)
	I0926 22:29:54.244428 1401287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.26554658s)
	W0926 22:29:54.244461 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.244491 1401287 retry.go:31] will retry after 392.451192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:54.303455 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.303472 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:54.303697 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.637695 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.732408 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:54.737348 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:54.738840 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:55.209616 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.209647 1401287 retry.go:31] will retry after 748.885115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:55.232030 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.238153 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.239111 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.331196 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:55.331261 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.348751 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:55.457803 1401287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:29:55.479373 1401287 addons.go:238] Setting addon gcp-auth=true in "addons-619347"
	I0926 22:29:55.479441 1401287 host.go:66] Checking if "addons-619347" exists ...
	I0926 22:29:55.479850 1401287 cli_runner.go:164] Run: docker container inspect addons-619347 --format={{.State.Status}}
	I0926 22:29:55.499515 1401287 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:29:55.499611 1401287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-619347
	I0926 22:29:55.520325 1401287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/addons-619347/id_rsa Username:docker}
	I0926 22:29:55.618144 1401287 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:55.619415 1401287 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:29:55.621107 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:29:55.621131 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:29:55.643383 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:29:55.643405 1401287 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:29:55.664765 1401287 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.664789 1401287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:29:55.685778 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:29:55.732904 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:55.737583 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:55.739755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:55.958754 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:56.145831 1401287 addons.go:479] Verifying addon gcp-auth=true in "addons-619347"
	I0926 22:29:56.147565 1401287 out.go:179] * Verifying gcp-auth addon...
	I0926 22:29:56.149656 1401287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:29:56.153451 1401287 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:29:56.153473 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
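The verifier above polls a single label selector until the pod leaves Pending; the equivalent manual check, with the context, namespace, and label taken from the log, would be:

	# Sketch only: inspect the pod the gcp-auth verifier is polling.
	kubectl --context addons-619347 -n gcp-auth get pods \
	        -l kubernetes.io/minikube-addons=gcp-auth -o wide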
	I0926 22:29:56.234575 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:56.238524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.240547 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.753812 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:56.754009 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:56.754105 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:56.754175 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:29:56.846438 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:56.846489 1401287 retry.go:31] will retry after 1.306898572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:57.154380 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.257757 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:57.257867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.257914 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.653373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:57.731799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:57.738612 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:57.739139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.153929 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:58.154158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.231698 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.238196 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.239871 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:58.653423 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:58.732047 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:58.737700 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:58.739381 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:29:58.876131 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:58.876169 1401287 retry.go:31] will retry after 1.510195391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:29:59.153627 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.231973 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.237626 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.239442 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:29:59.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:29:59.732199 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:29:59.737381 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:29:59.739318 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.154349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.234946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.237553 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.238970 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:00.387250 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.653371 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:00.754562 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:00.754718 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:00.754737 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:01.142390 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:01.142433 1401287 retry.go:31] will retry after 2.823589735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
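The retry.go delays logged for this apply (268ms, 392ms, 749ms, 1.3s, 1.5s, now 2.8s, with 3.9s, 5.2s, 5.5s, and 16.6s further down) grow roughly geometrically with jitter. A minimal shell sketch of that shape, illustrative only since the real loop lives in minikube's retry.go; note that for this particular failure the loop would never terminate, because the file itself is broken:

	# Sketch only: re-run the failing apply with jittered, roughly
	# doubling delays, as the retry.go lines above and below suggest.
	delay=0.4
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
	                            -f /etc/kubernetes/addons/ig-deployment.yaml; do
	    sleep "$delay"
	    delay=$(awk -v d="$delay" 'BEGIN { srand(); print d * (1.5 + rand()) }')
	done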
	I0926 22:30:01.153470 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.231864 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.238191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.238929 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:01.653817 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:01.732601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:01.738292 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:01.738765 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.153510 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.232061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:02.237606 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.239333 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.653691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:02.785100 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.785181 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.785282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.228531 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.231398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.237322 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.239087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.653658 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:03.754788 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.754892 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.754903 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.966722 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.154061 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:04.232281 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.240238 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.653129 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:04.657965 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.657997 1401287 retry.go:31] will retry after 3.931075545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.732441 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.738568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.739156 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.153676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.231619 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.237952 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.238902 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.653858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:05.732363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.737932 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.739708 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.153005 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.232588 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.238508 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.238930 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.653625 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.732133 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.737660 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.739398 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.153662 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.231544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.238376 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.238896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.653623 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.732168 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.737693 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.739572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.153679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.231882 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.237268 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.239112 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.589607 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:08.653128 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.732858 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.737867 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.739211 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.153590 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.232224 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.237615 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.239714 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0926 22:30:09.284897 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:09.284936 1401287 retry.go:31] will retry after 5.203674911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:09.653321 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.731879 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.737435 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.739163 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.153976 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.232225 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.237891 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.239799 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.652648 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.732289 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.740552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.740620 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.153709 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.231772 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.237915 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.238911 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.653574 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.731464 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.737883 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.738742 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.154161 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.255109 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.255143 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.255266 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.653341 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.732278 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.737987 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.739675 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.152601 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.231735 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.238458 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.238993 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.653963 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.732677 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.737942 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.738815 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.153349 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.231707 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.238128 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.238724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.489029 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:14.654034 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.755687 1401287 kapi.go:107] duration metric: took 24.519261155s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 22:30:14.755725 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.755952 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.152792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:15.222551 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.222596 1401287 retry.go:31] will retry after 5.506436948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.231403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.237852 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.662260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.731552 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.738097 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.154099 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.231851 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.237284 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.653593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.732118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.737657 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.153191 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.232638 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.238260 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.654087 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.732572 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.737869 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.153497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.231724 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.237938 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.653474 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.754180 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.754664 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.153672 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.231937 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.237429 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.653500 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.732332 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.737902 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.153193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.231558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.238229 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.653596 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.729807 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:20.755463 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.755497 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.156185 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.232540 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.237339 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:21.506242 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.506283 1401287 retry.go:31] will retry after 16.573257161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.653673 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.746511 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.747024 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.154193 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.255191 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.255336 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.653679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.732249 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.765524 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.153260 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.232592 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.237546 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.653954 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.732247 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.738249 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.153348 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.231679 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.238206 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.653640 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.754172 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.754291 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.155071 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.232312 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.237762 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.654098 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.755772 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.756117 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.153020 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.232253 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.237493 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.653784 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.731755 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.738149 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.153957 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.231912 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.237304 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.740418 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.740422 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.740489 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.153035 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.232351 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.253652 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.653198 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.732594 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.738617 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.153818 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.255363 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.255402 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.653377 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.795403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.795568 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.154437 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.255203 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.255255 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.654322 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.731875 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.738025 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.153152 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.232403 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.237980 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.686139 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.732196 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.737642 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.153176 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.232567 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.238193 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.653520 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.731607 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.738120 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.153329 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.231836 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.238090 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.653138 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.753505 1401287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.753695 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.153545 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.232120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.237425 1401287 kapi.go:107] duration metric: took 44.002941806s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:30:34.654015 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.732058 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.153560 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.232023 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.653149 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.733392 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.195661 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.294162 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.653726 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.732044 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.153456 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.231729 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.653114 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.732251 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.080636 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:38.154372 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.231375 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.653809 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.782691 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0926 22:30:38.852949 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:38.852986 1401287 retry.go:31] will retry after 15.881899723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:39.153131 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.232352 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.653465 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.731259 1401287 kapi.go:107] duration metric: took 48.503064069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0926 22:30:40.153304 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.652405 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.153555 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.652676 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.152544 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.653090 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.153739 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.652905 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.153461 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.653397 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.652913 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.153414 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.652678 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.153158 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.653282 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.152600 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.652859 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.153593 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.652792 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.152790 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.652641 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.153977 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.653558 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.153042 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.653062 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.153284 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.653232 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.153389 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.653118 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.735407 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:55.153085 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:55.342933 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.342967 1401287 retry.go:31] will retry after 26.788650375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:55.653379 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.153887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.653069 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.153833 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.653088 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.153701 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.653075 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.153896 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.652981 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.152946 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.653566 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.152984 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.653887 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.153373 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.654120 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.153468 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.653248 1401287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.153804 1401287 kapi.go:107] duration metric: took 1m8.004150077s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:04.155559 1401287 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-619347 cluster.
	I0926 22:31:04.156826 1401287 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:04.158107 1401287 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
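The kapi.go:96/kapi.go:107 lines above reflect a poll-until-running loop over a label selector: one pod List roughly every 500ms, a log line while any match is still Pending, and a duration metric on success. A minimal client-go sketch of that pattern, assuming its shape only (function name, namespace, and timeout below are illustrative, not minikube's actual helper):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching selector in ns is Running.
    func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
            panic(err)
        }
    }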
	I0926 22:31:22.132659 1401287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:22.704256 1401287 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0926 22:31:22.704391 1401287 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
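Every object in the two manifests applies cleanly ("unchanged"/"configured"); the exit status 1 comes from kubectl's client-side validation of ig-crd.yaml, whose error means some YAML document in that file carries neither apiVersion nor kind (typically a stray empty document or a truncated render). The log's suggested workaround is --validate=false, but fixing the manifest is the real remedy. A minimal Go sketch of the same check against a local copy of the file (the path and types below are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // typeMeta mirrors the two fields kubectl's validation complained about.
    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the failing manifest
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // walks every "---"-separated document in the stream
        for i := 0; ; i++ {
            var tm typeMeta
            if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            if tm.APIVersion == "" || tm.Kind == "" {
                fmt.Printf("document %d: apiVersion/kind not set\n", i)
            }
        }
    }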
	I0926 22:31:22.706313 1401287 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, default-storageclass, volcano, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0926 22:31:22.707981 1401287 addons.go:514] duration metric: took 1m34.923379678s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin metrics-server default-storageclass volcano registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0926 22:31:22.708039 1401287 start.go:246] waiting for cluster config update ...
	I0926 22:31:22.708063 1401287 start.go:255] writing updated cluster config ...
	I0926 22:31:22.708371 1401287 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:22.712517 1401287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:22.716253 1401287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.720372 1401287 pod_ready.go:94] pod "coredns-66bc5c9577-qctdw" is "Ready"
	I0926 22:31:22.720398 1401287 pod_ready.go:86] duration metric: took 4.121653ms for pod "coredns-66bc5c9577-qctdw" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.722139 1401287 pod_ready.go:83] waiting for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.725796 1401287 pod_ready.go:94] pod "etcd-addons-619347" is "Ready"
	I0926 22:31:22.725814 1401287 pod_ready.go:86] duration metric: took 3.654877ms for pod "etcd-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.727751 1401287 pod_ready.go:83] waiting for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.731230 1401287 pod_ready.go:94] pod "kube-apiserver-addons-619347" is "Ready"
	I0926 22:31:22.731252 1401287 pod_ready.go:86] duration metric: took 3.484052ms for pod "kube-apiserver-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:22.733085 1401287 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.117180 1401287 pod_ready.go:94] pod "kube-controller-manager-addons-619347" is "Ready"
	I0926 22:31:23.117210 1401287 pod_ready.go:86] duration metric: took 384.107267ms for pod "kube-controller-manager-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.316538 1401287 pod_ready.go:83] waiting for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.716914 1401287 pod_ready.go:94] pod "kube-proxy-sdscg" is "Ready"
	I0926 22:31:23.716945 1401287 pod_ready.go:86] duration metric: took 400.37971ms for pod "kube-proxy-sdscg" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:23.917057 1401287 pod_ready.go:83] waiting for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316600 1401287 pod_ready.go:94] pod "kube-scheduler-addons-619347" is "Ready"
	I0926 22:31:24.316631 1401287 pod_ready.go:86] duration metric: took 399.543309ms for pod "kube-scheduler-addons-619347" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:24.316645 1401287 pod_ready.go:40] duration metric: took 1.604097264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
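The pod_ready.go waits above treat a pod as "Ready" once its PodReady status condition reports True. A minimal sketch of that predicate (assumed shape; the real helper lives in minikube's test harness):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(isPodReady(p)) // true
    }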
	I0926 22:31:24.363816 1401287 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:24.365720 1401287 out.go:179] * Done! kubectl is now configured to use "addons-619347" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 26 22:36:00 addons-619347 dockerd[1116]: time="2025-09-26T22:36:00.400795529Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:36:11 addons-619347 dockerd[1116]: time="2025-09-26T22:36:11.425105161Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:36:24 addons-619347 dockerd[1116]: time="2025-09-26T22:36:24.453011041Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:36:52 addons-619347 dockerd[1116]: time="2025-09-26T22:36:52.331897314Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:36:52 addons-619347 dockerd[1116]: time="2025-09-26T22:36:52.362343516Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:36:53 addons-619347 dockerd[1116]: time="2025-09-26T22:36:53.408061616Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:37:16 addons-619347 dockerd[1116]: time="2025-09-26T22:37:16.435355240Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:37:17 addons-619347 dockerd[1116]: time="2025-09-26T22:37:17.988734039Z" level=info msg="ignoring event" container=80b5b413df3ef37b060ecb1d6d2d6d9b97016f7fa126df645c819a23e7d06dc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:37:33 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:37:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67932f7f901a55b118d183f4f628a937190e9c1ce489d6dfa9182a92804e46ec/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:37:33 addons-619347 dockerd[1116]: time="2025-09-26T22:37:33.389045626Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:37:33 addons-619347 dockerd[1116]: time="2025-09-26T22:37:33.420898973Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:37:46 addons-619347 dockerd[1116]: time="2025-09-26T22:37:46.332274758Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:37:46 addons-619347 dockerd[1116]: time="2025-09-26T22:37:46.364444562Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:08 addons-619347 dockerd[1116]: time="2025-09-26T22:38:08.332232521Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:38:08 addons-619347 dockerd[1116]: time="2025-09-26T22:38:08.421584413Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:08 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:38:08Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 26 22:38:14 addons-619347 dockerd[1116]: time="2025-09-26T22:38:14.410829521Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:42 addons-619347 dockerd[1116]: time="2025-09-26T22:38:42.448587681Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:38:52 addons-619347 dockerd[1116]: time="2025-09-26T22:38:52.328938208Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:38:52 addons-619347 dockerd[1116]: time="2025-09-26T22:38:52.362556615Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:39:33 addons-619347 dockerd[1116]: time="2025-09-26T22:39:33.432864462Z" level=info msg="ignoring event" container=67932f7f901a55b118d183f4f628a937190e9c1ce489d6dfa9182a92804e46ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:40:03 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:40:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02ed02bee6d73780bddc06cb8b6a6b9f7bca62787f463b8a37a9607797b22ec3/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:40:03 addons-619347 dockerd[1116]: time="2025-09-26T22:40:03.827570082Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:03 addons-619347 dockerd[1116]: time="2025-09-26T22:40:03.951061919Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:40:03 addons-619347 cri-dockerd[1422]: time="2025-09-26T22:40:03Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
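The recurring dockerd "toomanyrequests" lines are Docker Hub's anonymous pull rate limit: every docker.io pull from this node is unauthenticated, so the busybox image never arrives, which is consistent with the Pending/ContainersNotReady pods earlier in this report. One common mitigation, sketched under the assumption of a Docker Hub account (names and credentials below are placeholders): create a kubernetes.io/dockerconfigjson secret that pods reference via spec.imagePullSecrets, so kubelet pulls as an authenticated user.

    package main

    import (
        "context"
        "encoding/base64"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Placeholder credentials; a real setup would use a Docker Hub access token.
        auth := base64.StdEncoding.EncodeToString([]byte("user:token"))
        dockercfg := fmt.Sprintf(`{"auths":{"https://index.docker.io/v1/":{"auth":%q}}}`, auth)

        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "regcred", Namespace: "default"},
            Type:       corev1.SecretTypeDockerConfigJson,
            StringData: map[string]string{corev1.DockerConfigJsonKey: dockercfg},
        }
        if _, err := cs.CoreV1().Secrets("default").Create(context.Background(), secret, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

A pod then opts in with spec.imagePullSecrets: [{name: regcred}]; authenticated accounts get a substantially higher pull allowance than anonymous clients.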
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	68f3619046214       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   a60bbae2dab32       busybox
	ce8cf08b141fd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   de2617410b653       csi-hostpathplugin-rbzvs
	7dcfe799d3773       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          9 minutes ago       Running             csi-provisioner                          0                   de2617410b653       csi-hostpathplugin-rbzvs
	931b17716c09b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            9 minutes ago       Running             liveness-probe                           0                   de2617410b653       csi-hostpathplugin-rbzvs
	7d61fc01cddfd       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           9 minutes ago       Running             hostpath                                 0                   de2617410b653       csi-hostpathplugin-rbzvs
	1dafa88bf03ff       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                9 minutes ago       Running             node-driver-registrar                    0                   de2617410b653       csi-hostpathplugin-rbzvs
	728e0cf65646d       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             9 minutes ago       Running             controller                               0                   db768efcc91e0       ingress-nginx-controller-9cc49f96f-ghq9n
	4830d4a0f03bf       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   b9ce75482df7a       csi-hostpath-resizer-0
	2272adc16d5b8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   de2617410b653       csi-hostpathplugin-rbzvs
	e37820c539b12       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             9 minutes ago       Running             csi-attacher                             0                   84987d9e1a070       csi-hostpath-attacher-0
	4cc3707d46bf8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   3dd9df25ed9e8       snapshot-controller-7d9fbc56b8-2zg9l
	d91e1c6dab5ec       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   f5d5e2661efee       snapshot-controller-7d9fbc56b8-ml295
	64e745dd36107       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   9 minutes ago       Exited              create                                   0                   8c9898018e8fa       ingress-nginx-admission-create-dbtd8
	2be186df9d067       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   9 minutes ago       Exited              patch                                    0                   e4cb125881f09       ingress-nginx-admission-patch-65dgz
	8225f70c79655       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       9 minutes ago       Running             local-path-provisioner                   0                   287d670c65c8c       local-path-provisioner-648f6765c9-mgt7q
	a6d48b6dd738f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            10 minutes ago      Running             gadget                                   0                   1e350b656bd65       gadget-9rfhl
	6c95150654506       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         10 minutes ago      Running             minikube-ingress-dns                     0                   8f1cf5e8da338       kube-ingress-dns-minikube
	d9822a41079f6       6e38f40d628db                                                                                                                                10 minutes ago      Running             storage-provisioner                      0                   e7dd4d41d742b       storage-provisioner
	9ea233eb6b299       52546a367cc9e                                                                                                                                10 minutes ago      Running             coredns                                  0                   3dff0fbc29922       coredns-66bc5c9577-qctdw
	227d066a100ce       df0860106674d                                                                                                                                10 minutes ago      Running             kube-proxy                               0                   9cd7f6237aa02       kube-proxy-sdscg
	f5b2050f68de5       a0af72f2ec6d6                                                                                                                                10 minutes ago      Running             kube-controller-manager                  0                   fbe20fd4325ef       kube-controller-manager-addons-619347
	8209664c099ee       46169d968e920                                                                                                                                10 minutes ago      Running             kube-scheduler                           0                   779a1e971ca62       kube-scheduler-addons-619347
	9d1b130b03b02       90550c43ad2bc                                                                                                                                10 minutes ago      Running             kube-apiserver                           0                   f2516f75f5542       kube-apiserver-addons-619347
	5ae0da6e5bfbf       5f1f5298c888d                                                                                                                                10 minutes ago      Running             etcd                                     0                   fa78f9e958055       etcd-addons-619347
	
	
	==> controller_ingress [728e0cf65646] <==
	I0926 22:30:34.887711       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f6bd66b9-f1c6-476b-a596-e7c7ed771583", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0926 22:30:36.082816       7 nginx.go:319] "Starting NGINX process"
	I0926 22:30:36.082933       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0926 22:30:36.083217       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0926 22:30:36.083836       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:30:36.090331       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0926 22:30:36.090382       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-ghq9n"
	I0926 22:30:36.093730       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	I0926 22:30:36.132936       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:30:36.133071       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0926 22:30:36.133167       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0926 22:30:36.196634       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-ghq9n" node="addons-619347"
	W0926 22:35:31.376780       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0926 22:35:31.378855       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0926 22:35:31.381841       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0926 22:35:31.382122       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"49217b09-005f-4368-a333-dd023eb3d6ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0926 22:35:32.307592       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0926 22:35:32.308416       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0926 22:35:32.351887       7 controller.go:228] "Backend successfully reloaded"
	I0926 22:35:32.352086       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-ghq9n", UID:"ce9ba75b-f03c-4081-b6c3-12af26a48c26", APIVersion:"v1", ResourceVersion:"1265", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0926 22:35:35.641811       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0926 22:35:36.097517       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0926 22:35:36.101827       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"49217b09-005f-4368-a333-dd023eb3d6ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2279", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0926 22:35:38.975235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0926 22:35:42.307868       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [9ea233eb6b29] <==
	[INFO] 10.244.0.8:44508 - 52121 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000116925s
	[INFO] 10.244.0.8:40588 - 27017 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000090303s
	[INFO] 10.244.0.8:40588 - 26695 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000142751s
	[INFO] 10.244.0.8:32780 - 27322 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000091822s
	[INFO] 10.244.0.8:32780 - 26988 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130714s
	[INFO] 10.244.0.8:34268 - 17213 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132338s
	[INFO] 10.244.0.8:34268 - 16970 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009144s
	[INFO] 10.244.0.27:32935 - 45410 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000327431s
	[INFO] 10.244.0.27:49406 - 23181 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000412135s
	[INFO] 10.244.0.27:42691 - 10663 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129221s
	[INFO] 10.244.0.27:49167 - 28887 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157287s
	[INFO] 10.244.0.27:40544 - 36384 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160696s
	[INFO] 10.244.0.27:45145 - 3022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123636s
	[INFO] 10.244.0.27:57336 - 33875 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00499531s
	[INFO] 10.244.0.27:41391 - 16202 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005792959s
	[INFO] 10.244.0.27:59854 - 59303 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005004398s
	[INFO] 10.244.0.27:34824 - 56259 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005925015s
	[INFO] 10.244.0.27:36869 - 29305 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004734879s
	[INFO] 10.244.0.27:45437 - 987 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00498032s
	[INFO] 10.244.0.27:47010 - 60828 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005607554s
	[INFO] 10.244.0.27:46662 - 45152 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007088447s
	[INFO] 10.244.0.27:60306 - 17345 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000925116s
	[INFO] 10.244.0.27:50259 - 39178 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001983867s
	[INFO] 10.244.0.32:60236 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000345556s
	[INFO] 10.244.0.32:41492 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202476s
	
	
	==> describe nodes <==
	Name:               addons-619347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-619347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-619347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-619347
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-619347"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-619347
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:40:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:39:54 +0000   Fri, 26 Sep 2025 22:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-619347
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 0728f6ac4f7f4421b7f9eeb1f21a8502
	  System UUID:                bfe74e22-ee1d-47b3-9c54-c1f6ef287d9d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  gadget                      gadget-9rfhl                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-ghq9n                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-qctdw                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-rbzvs                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-619347                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-619347                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-619347                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sdscg                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-619347                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-2zg9l                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-ml295                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-mgt7q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-619347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-619347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-619347 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node addons-619347 event: Registered Node addons-619347 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 b0 60 62 2a 0f 08 06
	[  +2.140079] IPv4: martian source 10.244.0.8 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.023792] IPv4: martian source 10.244.0.8 from 10.244.0.7, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[  +1.345643] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[Sep26 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 1b 32 9d 1a 30 08 06
	[  +0.000481] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.000612] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	
	
	==> etcd [5ae0da6e5bfb] <==
	{"level":"warn","ts":"2025-09-26T22:29:51.882030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:29:56.750352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.699546ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:29:56.750520Z","caller":"traceutil/trace.go:172","msg":"trace[545120805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1029; }","duration":"124.835239ms","start":"2025-09-26T22:29:56.625613Z","end":"2025-09-26T22:29:56.750448Z","steps":["trace[545120805] 'range keys from in-memory index tree'  (duration: 124.657379ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:29:56.750545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.950667ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040237519390281 > lease_revoke:<id:70cc9988256cc037>","response":"size:29"}
	{"level":"info","ts":"2025-09-26T22:29:56.750622Z","caller":"traceutil/trace.go:172","msg":"trace[550957012] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"110.947341ms","start":"2025-09-26T22:29:56.639663Z","end":"2025-09-26T22:29:56.750610Z","steps":["trace[550957012] 'read index received'  (duration: 40.919µs)","trace[550957012] 'applied index is now lower than readState.Index'  (duration: 110.905488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:29:56.750818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.149289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/gadget/gadget-role\" limit:1 ","response":"range_response_count:1 size:929"}
	{"level":"info","ts":"2025-09-26T22:29:56.750855Z","caller":"traceutil/trace.go:172","msg":"trace[1241908482] range","detail":"{range_begin:/registry/roles/gadget/gadget-role; range_end:; response_count:1; response_revision:1029; }","duration":"111.19346ms","start":"2025-09-26T22:29:56.639653Z","end":"2025-09-26T22:29:56.750846Z","steps":["trace[1241908482] 'agreement among raft nodes before linearized reading'  (duration: 111.040998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:11.365351Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.170976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-26T22:30:11.365445Z","caller":"traceutil/trace.go:172","msg":"trace[2098668277] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:1075; }","duration":"101.279805ms","start":"2025-09-26T22:30:11.264150Z","end":"2025-09-26T22:30:11.365430Z","steps":["trace[2098668277] 'range keys from in-memory index tree'  (duration: 101.005862ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:16.821834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.870731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.885825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.893998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.925127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.935713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.946969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.959548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.971710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.978983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.988574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:16.999879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:30:33.500976Z","caller":"traceutil/trace.go:172","msg":"trace[381421690] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"106.589616ms","start":"2025-09-26T22:30:33.394366Z","end":"2025-09-26T22:30:33.500955Z","steps":["trace[381421690] 'process raft request'  (duration: 106.446725ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:39:38.933371Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1764}
	{"level":"info","ts":"2025-09-26T22:39:38.967116Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1764,"took":"33.118081ms","hash":3118520929,"current-db-size-bytes":9007104,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":6070272,"current-db-size-in-use":"6.1 MB"}
	{"level":"info","ts":"2025-09-26T22:39:38.967158Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3118520929,"revision":1764,"compact-revision":-1}
	
	
	==> kernel <==
	 22:40:18 up  4:22,  0 users,  load average: 3.00, 1.79, 1.66
	Linux addons-619347 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9d1b130b03b0] <==
	I0926 22:34:42.944290       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0926 22:34:43.573127       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	W0926 22:34:43.772031       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0926 22:34:43.798029       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0926 22:34:43.851264       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0926 22:34:43.878591       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0926 22:34:43.906988       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0926 22:34:43.944869       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0926 22:34:44.127664       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0926 22:34:45.317063       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0926 22:35:01.714574       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41168: use of closed network connection
	E0926 22:35:01.904835       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41184: use of closed network connection
	I0926 22:35:11.393202       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.151.186"}
	I0926 22:35:31.379659       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0926 22:35:31.551505       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.109.176"}
	I0926 22:35:49.845467       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:36:14.663730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:36:27.796093       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0926 22:37:00.802682       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:18.863268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:08.393742       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:27.210526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:17.734882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:39.792776       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:39:49.815861       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [f5b2050f68de] <==
	E0926 22:39:15.278587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:16.713241       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:16.714288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:32.635812       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:32.636815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:39.121594       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:39.122523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:40.577256       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:40.578226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:44.012974       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:44.013882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:46.192552       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:46.193614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:48.431737       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:48.432775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:57.858580       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:57.859711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:04.128014       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:04.129018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:06.866946       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:06.867972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:10.755019       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:10.756075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:17.528893       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:17.529946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [227d066a100c] <==
	I0926 22:29:48.632051       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:29:48.823798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:48.926913       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:48.926974       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:29:48.927216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:48.966553       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:29:48.966624       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:48.976081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:48.977337       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:48.977360       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:48.983888       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:48.983916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:48.984026       1 config.go:200] "Starting service config controller"
	I0926 22:29:48.984052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:48.984116       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:48.984123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:48.987589       1 config.go:309] "Starting node config controller"
	I0926 22:29:48.987610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:48.987619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:49.084696       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:29:49.084764       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:49.085094       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8209664c099e] <==
	E0926 22:29:39.815534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:39.815639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:39.815741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:39.815842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:39.815881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:39.815924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:39.815978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:39.816083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:39.816079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:39.816116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:39.816205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:29:39.816287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:39.816442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:39.816526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.634465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:40.655555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:29:40.682056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:40.739683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:40.750044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:29:40.783186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:40.869968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:40.950295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:41.003301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:29:41.010326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0926 22:29:41.412472       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:39:33 addons-619347 kubelet[2321]: I0926 22:39:33.599896    2321 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e88054c-f97a-404e-8bd1-f6294a8c32cc-kube-api-access-ggjth" (OuterVolumeSpecName: "kube-api-access-ggjth") pod "4e88054c-f97a-404e-8bd1-f6294a8c32cc" (UID: "4e88054c-f97a-404e-8bd1-f6294a8c32cc"). InnerVolumeSpecName "kube-api-access-ggjth". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 26 22:39:33 addons-619347 kubelet[2321]: I0926 22:39:33.698122    2321 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4e88054c-f97a-404e-8bd1-f6294a8c32cc-script\") on node \"addons-619347\" DevicePath \"\""
	Sep 26 22:39:33 addons-619347 kubelet[2321]: I0926 22:39:33.698158    2321 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4e88054c-f97a-404e-8bd1-f6294a8c32cc-data\") on node \"addons-619347\" DevicePath \"\""
	Sep 26 22:39:33 addons-619347 kubelet[2321]: I0926 22:39:33.698169    2321 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ggjth\" (UniqueName: \"kubernetes.io/projected/4e88054c-f97a-404e-8bd1-f6294a8c32cc-kube-api-access-ggjth\") on node \"addons-619347\" DevicePath \"\""
	Sep 26 22:39:34 addons-619347 kubelet[2321]: I0926 22:39:34.322989    2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e88054c-f97a-404e-8bd1-f6294a8c32cc" path="/var/lib/kubelet/pods/4e88054c-f97a-404e-8bd1-f6294a8c32cc/volumes"
	Sep 26 22:39:34 addons-619347 kubelet[2321]: E0926 22:39:34.325125    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:39:45 addons-619347 kubelet[2321]: E0926 22:39:45.312693    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:39:48 addons-619347 kubelet[2321]: E0926 22:39:48.310149    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:39:56 addons-619347 kubelet[2321]: E0926 22:39:56.312755    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:40:00 addons-619347 kubelet[2321]: E0926 22:40:00.319225    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: I0926 22:40:03.400335    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/42a58e94-a5df-4416-8bd6-cec6fbf7de16-script\") pod \"helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706\" (UID: \"42a58e94-a5df-4416-8bd6-cec6fbf7de16\") " pod="local-path-storage/helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: I0926 22:40:03.400413    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/42a58e94-a5df-4416-8bd6-cec6fbf7de16-data\") pod \"helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706\" (UID: \"42a58e94-a5df-4416-8bd6-cec6fbf7de16\") " pod="local-path-storage/helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: I0926 22:40:03.400447    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvxs\" (UniqueName: \"kubernetes.io/projected/42a58e94-a5df-4416-8bd6-cec6fbf7de16-kube-api-access-plvxs\") pod \"helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706\" (UID: \"42a58e94-a5df-4416-8bd6-cec6fbf7de16\") " pod="local-path-storage/helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: E0926 22:40:03.952887    2321 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: E0926 22:40:03.952938    2321 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: E0926 22:40:03.953029    2321 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706_local-path-storage(42a58e94-a5df-4416-8bd6-cec6fbf7de16): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:40:03 addons-619347 kubelet[2321]: E0926 22:40:03.953066    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706" podUID="42a58e94-a5df-4416-8bd6-cec6fbf7de16"
	Sep 26 22:40:04 addons-619347 kubelet[2321]: E0926 22:40:04.537367    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706" podUID="42a58e94-a5df-4416-8bd6-cec6fbf7de16"
	Sep 26 22:40:07 addons-619347 kubelet[2321]: I0926 22:40:07.309756    2321 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:40:08 addons-619347 kubelet[2321]: E0926 22:40:08.320911    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b8c0a4b7-2df5-4ced-ab25-28c6abfa74d7"
	Sep 26 22:40:13 addons-619347 kubelet[2321]: E0926 22:40:13.310371    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e055794c-8563-455d-956f-81e9b7627d09"
	Sep 26 22:40:18 addons-619347 kubelet[2321]: E0926 22:40:18.370274    2321 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:18 addons-619347 kubelet[2321]: E0926 22:40:18.370335    2321 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 26 22:40:18 addons-619347 kubelet[2321]: E0926 22:40:18.370442    2321 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706_local-path-storage(42a58e94-a5df-4416-8bd6-cec6fbf7de16): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:40:18 addons-619347 kubelet[2321]: E0926 22:40:18.370497    2321 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706" podUID="42a58e94-a5df-4416-8bd6-cec6fbf7de16"
	
	
	==> storage-provisioner [d9822a41079f] <==
	W0926 22:39:52.903019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:54.907038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:54.911459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:56.915237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:56.919195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:58.923286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:58.928718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:00.932094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:00.935950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:02.939201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:02.944558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:04.947910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:04.951794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:06.954233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:06.958799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:08.961463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:08.966017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:10.968682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:10.972285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:12.975959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:12.981221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:14.985736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:14.991264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:16.994190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:16.997887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
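
Every pending pod in the log dump above fails for the same reason: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), reported by the kubelet for the nginx, task-pv-pod, and helper-pod containers alike. One quick way to confirm that no other failure mode is hiding in the noise is to filter warning events across all namespaces; the command below is a sketch rather than part of the recorded run, reusing the context name from the commands above:

    kubectl --context addons-619347 get events -A \
      --field-selector type=Warning,reason=Failed \
      -o custom-columns=NS:.metadata.namespace,POD:.involvedObject.name,MSG:.message \
      | grep -i toomanyrequests
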
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-619347 -n addons-619347
helpers_test.go:269: (dbg) Run:  kubectl --context addons-619347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706: exit status 1 (78.377597ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-619347/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:35:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq742 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jq742:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m48s                 default-scheduler  Successfully assigned default/nginx to addons-619347
	  Normal   Pulling    2m5s (x5 over 4m47s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m5s (x5 over 4m47s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m5s (x5 over 4m47s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s (x15 over 4m47s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    11s (x19 over 4m47s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-619347/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:35:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xgkr7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xgkr7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m35s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-619347
	  Warning  Failed     4m34s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    97s (x5 over 4m35s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     97s (x5 over 4m34s)   kubelet            Error: ErrImagePull
	  Warning  Failed     97s (x4 over 4m19s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     46s (x15 over 4m34s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x18 over 4m34s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfq8j (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sfq8j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dbtd8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-65dgz" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-619347 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-dbtd8 ingress-nginx-admission-patch-65dgz helper-pod-create-pvc-5cf2a3fe-f531-4fd4-966d-620256eb3706: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-619347 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.666744115s)
--- FAIL: TestAddons/parallel/LocalPath (344.74s)

TestFunctional/parallel/DashboardCmd (301.93s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618103 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
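The failure at functional_test.go:933 means the dashboard subprocess never printed a URL on stdout before the test gave up. Conceptually the check looks like the sketch below; the regexp, timeout, and use of panic are illustrative assumptions, not the test's actual implementation:

	// Sketch: start the dashboard command and scan its stdout for an
	// http(s) URL, giving up after an assumed deadline.
	package main

	import (
		"bufio"
		"context"
		"fmt"
		"os/exec"
		"regexp"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) // assumed deadline
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
			"dashboard", "--url", "--port", "36195", "-p", "functional-618103")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		urlRe := regexp.MustCompile(`https?://[^\s]+`) // assumed pattern
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if u := urlRe.FindString(sc.Text()); u != "" {
				fmt.Println("dashboard URL:", u)
				return
			}
		}
		fmt.Println("output didn't produce a URL") // the failure mode seen above
	}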
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618103 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618103 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618103 --alsologtostderr -v=1] stderr:
I0926 22:53:43.063954 1470462 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:43.064207 1470462 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:43.064216 1470462 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:43.064221 1470462 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:43.064463 1470462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
I0926 22:53:43.064822 1470462 mustload.go:65] Loading cluster: functional-618103
I0926 22:53:43.065185 1470462 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:43.065623 1470462 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:43.083846 1470462 host.go:66] Checking if "functional-618103" exists ...
I0926 22:53:43.084195 1470462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0926 22:53:43.138174 1470462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:43.128112169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0926 22:53:43.138301 1470462 api_server.go:166] Checking apiserver status ...
I0926 22:53:43.138347 1470462 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0926 22:53:43.138383 1470462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:43.156189 1470462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:43.256866 1470462 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8955/cgroup
W0926 22:53:43.267034 1470462 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/8955/cgroup: Process exited with status 1
stdout:

stderr:
I0926 22:53:43.267101 1470462 ssh_runner.go:195] Run: ls
I0926 22:53:43.270946 1470462 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0926 22:53:43.275171 1470462 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0926 22:53:43.275212 1470462 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0926 22:53:43.275359 1470462 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:43.275375 1470462 addons.go:69] Setting dashboard=true in profile "functional-618103"
I0926 22:53:43.275386 1470462 addons.go:238] Setting addon dashboard=true in "functional-618103"
I0926 22:53:43.275421 1470462 host.go:66] Checking if "functional-618103" exists ...
I0926 22:53:43.275764 1470462 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:43.295061 1470462 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0926 22:53:43.296356 1470462 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0926 22:53:43.297781 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0926 22:53:43.297798 1470462 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0926 22:53:43.297858 1470462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:43.315514 1470462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:43.423075 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0926 22:53:43.423101 1470462 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0926 22:53:43.442373 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0926 22:53:43.442396 1470462 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0926 22:53:43.460686 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0926 22:53:43.460713 1470462 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0926 22:53:43.480150 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0926 22:53:43.480174 1470462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0926 22:53:43.498269 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0926 22:53:43.498297 1470462 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0926 22:53:43.517383 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0926 22:53:43.517407 1470462 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0926 22:53:43.535649 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0926 22:53:43.535677 1470462 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0926 22:53:43.554752 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0926 22:53:43.554781 1470462 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0926 22:53:43.574258 1470462 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:53:43.574288 1470462 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0926 22:53:43.592826 1470462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:53:44.015788 1470462 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-618103 addons enable metrics-server

I0926 22:53:44.016790 1470462 addons.go:201] Writing out "functional-618103" config to set dashboard=true...
W0926 22:53:44.017024 1470462 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0926 22:53:44.017706 1470462 kapi.go:59] client config for functional-618103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.key", CAFile:"/home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0926 22:53:44.018180 1470462 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0926 22:53:44.018199 1470462 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0926 22:53:44.018203 1470462 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0926 22:53:44.018209 1470462 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0926 22:53:44.018212 1470462 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0926 22:53:44.025775 1470462 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  7f7409a8-f201-423b-8cef-b9c58f8a1b85 1408 0 2025-09-26 22:53:43 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-26 22:53:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.102.146.163,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.146.163],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0926 22:53:44.025920 1470462 out.go:285] * Launching proxy ...
* Launching proxy ...
I0926 22:53:44.026007 1470462 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-618103 proxy --port 36195]
I0926 22:53:44.026279 1470462 dashboard.go:157] Waiting for kubectl to output host:port ...
I0926 22:53:44.070585 1470462 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0926 22:53:44.070639 1470462 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0926 22:53:44.078239 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97b01744-3538-4ed9-ac94-e9c44ea89238] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b08e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033e3c0 TLS:<nil>}
I0926 22:53:44.078342 1470462 retry.go:31] will retry after 147.451µs: Temporary Error: unexpected response code: 503
I0926 22:53:44.081537 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73e9baba-5e69-4d3c-8b8b-12c8d2c20325] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc00038c900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033e500 TLS:<nil>}
I0926 22:53:44.081598 1470462 retry.go:31] will retry after 151.374µs: Temporary Error: unexpected response code: 503
I0926 22:53:44.084726 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b3bf0cf-9306-44cb-9f9f-b394e258d9de] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc0002517c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I0926 22:53:44.084772 1470462 retry.go:31] will retry after 119.123µs: Temporary Error: unexpected response code: 503
I0926 22:53:44.087779 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d6d545a2-0e83-4f8c-bd23-93337235bba3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc00038cb00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I0926 22:53:44.087829 1470462 retry.go:31] will retry after 214.616µs: Temporary Error: unexpected response code: 503
I0926 22:53:44.090752 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8f68403-98bb-46bc-ab88-5b2be99b2495] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000251900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I0926 22:53:44.090785 1470462 retry.go:31] will retry after 443.868µs: Temporary Error: unexpected response code: 503
I0926 22:53:44.093860 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[333a462b-eff4-4606-9377-a633bd257202] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b08f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf540 TLS:<nil>}
I0926 22:53:44.093903 1470462 retry.go:31] will retry after 1.007816ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.096876 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1497ebfb-cfbe-4f46-928e-f990200700a4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc00038cd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033e640 TLS:<nil>}
I0926 22:53:44.096916 1470462 retry.go:31] will retry after 981.853µs: Temporary Error: unexpected response code: 503
I0926 22:53:44.100105 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b7007ed-c7bc-424e-a304-1b97283bfc12] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000251a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I0926 22:53:44.100149 1470462 retry.go:31] will retry after 2.131657ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.105911 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e0fd3b19-df63-494d-95ce-aa0c3e0333cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc00038d200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf680 TLS:<nil>}
I0926 22:53:44.105967 1470462 retry.go:31] will retry after 3.048656ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.111433 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1cd55179-d644-43f1-a059-c435c5c2c5b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000251cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033e780 TLS:<nil>}
I0926 22:53:44.111500 1470462 retry.go:31] will retry after 2.732808ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.117959 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30f37e7d-dd6c-41ed-8dc9-e18c7bde26c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc00038d3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf7c0 TLS:<nil>}
I0926 22:53:44.118000 1470462 retry.go:31] will retry after 5.473431ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.126185 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0413b47f-82ec-4760-b134-69d03367b62c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b09080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206780 TLS:<nil>}
I0926 22:53:44.126227 1470462 retry.go:31] will retry after 5.088111ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.134339 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[be99a76d-5abd-4791-85cf-44ec9d12be1a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b09140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033e8c0 TLS:<nil>}
I0926 22:53:44.134399 1470462 retry.go:31] will retry after 15.65263ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.153310 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e0189137-7b86-4844-b723-37a88cc3d382] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc00038de40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033ea00 TLS:<nil>}
I0926 22:53:44.153364 1470462 retry.go:31] will retry after 14.143996ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.171148 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8375160d-4822-4f43-87ca-d634808818bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b09240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002068c0 TLS:<nil>}
I0926 22:53:44.171222 1470462 retry.go:31] will retry after 15.261404ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.190248 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[52fa8ce7-e4f0-410f-97ad-7522a08d6557] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc0004d6240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033eb40 TLS:<nil>}
I0926 22:53:44.190302 1470462 retry.go:31] will retry after 45.368777ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.239562 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e1748a8d-6ba1-45a5-ad35-09d818fc70b8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc0004d6580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I0926 22:53:44.239642 1470462 retry.go:31] will retry after 76.361914ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.320011 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36dc0c2c-0115-43a7-8fc3-ec342f84536f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b09380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I0926 22:53:44.320075 1470462 retry.go:31] will retry after 144.150946ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.467710 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5387c68-d3f0-4dc5-98e5-9a173141e3f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc0004d7540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033ec80 TLS:<nil>}
I0926 22:53:44.467792 1470462 retry.go:31] will retry after 105.332106ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.576688 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fafda498-2d0d-40d7-9665-55b2eb361bf9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc000b09480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I0926 22:53:44.576760 1470462 retry.go:31] will retry after 116.209262ms: Temporary Error: unexpected response code: 503
I0926 22:53:44.696014 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c8b7fed-cff5-46f4-99d6-c3ffbac5a357] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:44 GMT]] Body:0xc0004220c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033edc0 TLS:<nil>}
I0926 22:53:44.696099 1470462 retry.go:31] will retry after 485.182753ms: Temporary Error: unexpected response code: 503
I0926 22:53:45.184588 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1042985a-401f-4d92-a75b-8fd34f2f6b10] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:45 GMT]] Body:0xc000251e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I0926 22:53:45.184684 1470462 retry.go:31] will retry after 521.613646ms: Temporary Error: unexpected response code: 503
I0926 22:53:45.710312 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5644c750-427a-488e-9279-85e1f9bce633] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:45 GMT]] Body:0xc000b09580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I0926 22:53:45.710385 1470462 retry.go:31] will retry after 902.733821ms: Temporary Error: unexpected response code: 503
I0926 22:53:46.616312 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6bfdded7-86fb-4058-9340-9cc729eae958] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:46 GMT]] Body:0xc000422180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033ef00 TLS:<nil>}
I0926 22:53:46.616387 1470462 retry.go:31] will retry after 1.442687929s: Temporary Error: unexpected response code: 503
I0926 22:53:48.063167 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e879964b-714a-497a-b050-f323a27db117] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:48 GMT]] Body:0xc000b09680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I0926 22:53:48.063250 1470462 retry.go:31] will retry after 1.703490312s: Temporary Error: unexpected response code: 503
I0926 22:53:49.770571 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0b7ac9a-7551-47da-aa06-76f79962e453] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:49 GMT]] Body:0xc000422280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033f040 TLS:<nil>}
I0926 22:53:49.770634 1470462 retry.go:31] will retry after 2.778834938s: Temporary Error: unexpected response code: 503
I0926 22:53:52.554058 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[57d2b429-cc88-4767-bf01-d3642456cd09] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:52 GMT]] Body:0xc0007d0340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I0926 22:53:52.554135 1470462 retry.go:31] will retry after 4.315162486s: Temporary Error: unexpected response code: 503
I0926 22:53:56.875335 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5bb7c491-7eb7-454a-8a6b-5095be34e7c6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:56 GMT]] Body:0xc0007d06c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfa40 TLS:<nil>}
I0926 22:53:56.875422 1470462 retry.go:31] will retry after 5.328706011s: Temporary Error: unexpected response code: 503
I0926 22:54:02.211141 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fe86568-8be2-4549-afc6-ce87160509fc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:54:02 GMT]] Body:0xc0004223c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfb80 TLS:<nil>}
I0926 22:54:02.211213 1470462 retry.go:31] will retry after 7.34410299s: Temporary Error: unexpected response code: 503
I0926 22:54:09.559322 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[beb03cbe-0fc1-4733-8fdb-faf6791b3688] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:54:09 GMT]] Body:0xc000b09740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I0926 22:54:09.559386 1470462 retry.go:31] will retry after 17.177569063s: Temporary Error: unexpected response code: 503
I0926 22:54:26.742295 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9652448d-0adc-4e7d-a503-704eef522d0d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:54:26 GMT]] Body:0xc0004224c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033f180 TLS:<nil>}
I0926 22:54:26.742365 1470462 retry.go:31] will retry after 12.372481858s: Temporary Error: unexpected response code: 503
I0926 22:54:39.118600 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9c07355-cb56-417e-93d3-8908fb69a6d2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:54:39 GMT]] Body:0xc0007d0840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033f2c0 TLS:<nil>}
I0926 22:54:39.118667 1470462 retry.go:31] will retry after 41.738175322s: Temporary Error: unexpected response code: 503
I0926 22:55:20.862394 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a004853c-5b20-4bc1-a366-862fd48c23b3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:55:20 GMT]] Body:0xc0007d0900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I0926 22:55:20.862489 1470462 retry.go:31] will retry after 44.839833361s: Temporary Error: unexpected response code: 503
I0926 22:56:05.705644 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[02e79068-6265-4273-9e26-62a4213ea8e1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:56:05 GMT]] Body:0xc0006b2100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000696280 TLS:<nil>}
I0926 22:56:05.705728 1470462 retry.go:31] will retry after 34.055381813s: Temporary Error: unexpected response code: 503
I0926 22:56:39.764592 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[352883c8-0750-437f-9ae2-7a3e80ceff25] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:56:39 GMT]] Body:0xc0006b2180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00033f400 TLS:<nil>}
I0926 22:56:39.764663 1470462 retry.go:31] will retry after 1m9.221947282s: Temporary Error: unexpected response code: 503
I0926 22:57:48.989655 1470462 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28a87606-0da2-4897-a4d1-0f38afef96e4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:57:48 GMT]] Body:0xc0007d0380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006963c0 TLS:<nil>}
I0926 22:57:48.989736 1470462 retry.go:31] will retry after 1m28.138939584s: Temporary Error: unexpected response code: 503
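Every probe above gets a 503 from the apiserver proxy because the kubernetes-dashboard pod never becomes ready (its images are also pulled from docker.io, which this run was evidently rate-limited against), so the retry delays grow from microseconds to minutes until the verification window expires. The pattern, sketched with assumed delays and deadline rather than minikube's actual retry.go values:

	// Sketch: poll the proxied dashboard URL, treating 503 as a temporary
	// error and retrying with a growing delay until a deadline.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		deadline := time.Now().Add(5 * time.Minute) // assumed window
		delay := 100 * time.Microsecond             // assumed starting delay
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("dashboard healthy")
					return
				}
				fmt.Printf("unexpected response code: %d; will retry after %s\n", resp.StatusCode, delay)
			}
			time.Sleep(delay)
			delay *= 2 // grows toward the minute-scale waits seen above
		}
		fmt.Println("dashboard never became healthy before the deadline")
	}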
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
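The snapshot shows all three proxy variables empty, which rules out a host proxy as the cause of the 503s. A trivial sketch of taking such a snapshot (an approximation, not the helper's actual code):

	// Sketch: report the proxy environment the way the post-mortem does.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>" // matches the report's rendering above
			}
			fmt.Printf("%s=%q\n", k, v)
		}
	}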
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-618103
helpers_test.go:243: (dbg) docker inspect functional-618103:

-- stdout --
	[
	    {
	        "Id": "40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3",
	        "Created": "2025-09-26T22:44:39.177529673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1446213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:44:39.216160085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/hosts",
	        "LogPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3-json.log",
	        "Name": "/functional-618103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-618103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-618103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3",
	                "LowerDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-618103",
	                "Source": "/var/lib/docker/volumes/functional-618103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-618103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-618103",
	                "name.minikube.sigs.k8s.io": "functional-618103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6d4e1c4445c59fbebd8bb1273f4210abdbd4271047b51cd8d41d8ebd4919a5e",
	            "SandboxKey": "/var/run/docker/netns/e6d4e1c4445c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-618103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3f:38:e2:60:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47b79b027c2f31af98f68c030f481ae4c06be4c4ce5c8d33e9a1bc7acdb3fb49",
	                    "EndpointID": "32a9b59825347209e2fa44185c54fe4e9a63f24b54509e66744fdc9b9662afb5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-618103",
	                        "40fba9eb93d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
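The Ports map in the inspect output is how the harness finds where each container port is published on the host (22/tcp at 127.0.0.1:33891 for SSH, 8441/tcp at 33894 for the apiserver). A self-contained sketch that resolves the SSH port with the same docker inspect template the cli_runner lines above use:

	// Sketch: extract the host port bound to the container's 22/tcp via
	// the `docker container inspect -f` template seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-618103").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33891 per the output above
	}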
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-618103 -n functional-618103
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 logs -n 25: (1.008270173s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                    ARGS                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-618103 ssh findmnt -T /mount1                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh            │ functional-618103 ssh findmnt -T /mount1                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh findmnt -T /mount2                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh findmnt -T /mount3                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ mount          │ -p functional-618103 --kill=true                                                                            │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-618103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker           │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-618103 --alsologtostderr -v=1                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ license        │                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-618103 update-context --alsologtostderr -v=2                                                     │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-618103 update-context --alsologtostderr -v=2                                                     │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-618103 update-context --alsologtostderr -v=2                                                     │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format short --alsologtostderr                                                 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format yaml --alsologtostderr                                                  │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh pgrep buildkitd                                                                       │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ image          │ functional-618103 image build -t localhost/my-image:functional-618103 testdata/build --alsologtostderr      │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls                                                                                  │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format json --alsologtostderr                                                  │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format table --alsologtostderr                                                 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ service        │ functional-618103 service list                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:57 UTC │ 26 Sep 25 22:58 UTC │
	│ service        │ functional-618103 service list -o json                                                                      │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:58 UTC │ 26 Sep 25 22:58 UTC │
	│ service        │ functional-618103 service --namespace=default --https --url hello-node                                      │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:58 UTC │                     │
	│ service        │ functional-618103 service hello-node --url --format={{.IP}}                                                 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:58 UTC │                     │
	│ service        │ functional-618103 service hello-node --url                                                                  │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:58 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:53:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:53:42.860082 1470331 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:42.860378 1470331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.860389 1470331 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:42.860394 1470331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.860610 1470331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:53:42.861084 1470331 out.go:368] Setting JSON to false
	I0926 22:53:42.862233 1470331 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16567,"bootTime":1758910656,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:42.862297 1470331 start.go:140] virtualization: kvm guest
	I0926 22:53:42.864204 1470331 out.go:179] * [functional-618103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:42.865433 1470331 notify.go:220] Checking for updates...
	I0926 22:53:42.865460 1470331 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:42.866821 1470331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:42.868122 1470331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:53:42.869399 1470331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:53:42.870466 1470331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:42.871525 1470331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:42.873004 1470331 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:53:42.873467 1470331 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:42.897179 1470331 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:53:42.897271 1470331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:42.955533 1470331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:42.944538569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:42.955662 1470331 docker.go:318] overlay module found
	I0926 22:53:42.957493 1470331 out.go:179] * Using the docker driver based on existing profile
	I0926 22:53:42.958658 1470331 start.go:304] selected driver: docker
	I0926 22:53:42.958672 1470331 start.go:924] validating driver "docker" against &{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:42.958752 1470331 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:42.958844 1470331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:43.011560 1470331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:43.002442894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:43.012166 1470331 cni.go:84] Creating CNI manager for ""
	I0926 22:53:43.012228 1470331 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:53:43.012283 1470331 start.go:348] cluster config:
	{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:43.014130 1470331 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.534888630Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.552473025Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.577382631Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:45 functional-618103 dockerd[6907]: time="2025-09-26T22:53:45.290018725Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:56 functional-618103 dockerd[6907]: time="2025-09-26T22:53:56.219133133Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:56 functional-618103 dockerd[6907]: time="2025-09-26T22:53:56.248363539Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:58 functional-618103 dockerd[6907]: time="2025-09-26T22:53:58.217156064Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:53:58 functional-618103 dockerd[6907]: time="2025-09-26T22:53:58.245543449Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:54:25 functional-618103 dockerd[6907]: time="2025-09-26T22:54:25.222693657Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:54:25 functional-618103 dockerd[6907]: time="2025-09-26T22:54:25.253441326Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:54:26 functional-618103 dockerd[6907]: time="2025-09-26T22:54:26.219774042Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:54:26 functional-618103 dockerd[6907]: time="2025-09-26T22:54:26.250590001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:55:15 functional-618103 dockerd[6907]: time="2025-09-26T22:55:15.220683095Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:55:15 functional-618103 dockerd[6907]: time="2025-09-26T22:55:15.316199586Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:55:15 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:55:15Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Sep 26 22:55:18 functional-618103 dockerd[6907]: time="2025-09-26T22:55:18.220299141Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:55:18 functional-618103 dockerd[6907]: time="2025-09-26T22:55:18.244343669Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:56:42 functional-618103 dockerd[6907]: time="2025-09-26T22:56:42.224380428Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:56:42 functional-618103 dockerd[6907]: time="2025-09-26T22:56:42.259081089Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:56:50 functional-618103 dockerd[6907]: time="2025-09-26T22:56:50.218255322Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:56:50 functional-618103 dockerd[6907]: time="2025-09-26T22:56:50.248224667Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:58:35 functional-618103 dockerd[6907]: time="2025-09-26T22:58:35.385338222Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:58:35 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:58:35Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 26 22:58:39 functional-618103 dockerd[6907]: time="2025-09-26T22:58:39.292567118Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:58:40 functional-618103 dockerd[6907]: time="2025-09-26T22:58:40.299396487Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
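	
	The pull failures above are Docker Hub's unauthenticated rate limit ("toomanyrequests"), hit repeatedly while the dashboard and nginx image pulls retry. A minimal sketch of workarounds, assuming Docker Hub credentials or a mirror are available to the job; note the pulls come from the Docker daemon inside the minikube node, so a host-side login alone does not help:
	
	  # authenticate the node's daemon (hypothetical <dockerhub-user>)
	  minikube -p functional-618103 ssh -- docker login -u <dockerhub-user>
	  # or route pulls through a registry mirror when (re)starting the profile
	  minikube start -p functional-618103 --registry-mirror=https://mirror.gcr.io
	  # or side-load the image so no registry pull is needed at all
	  minikube -p functional-618103 image load kubernetesui/metrics-scraper:v1.0.8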
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b70539961c432       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   ecbf501ce8d99       busybox-mount
	05018b68e7ad4       mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                         10 minutes ago      Running             mysql                     0                   dae83eded4d5b       mysql-5bb876957f-vxwl9
	4b943b553941a       52546a367cc9e                                                                                         11 minutes ago      Running             coredns                   2                   3bc210e6c6ac3       coredns-66bc5c9577-6k65s
	ad0540cd441bd       6e38f40d628db                                                                                         11 minutes ago      Running             storage-provisioner       3                   d3ee80c4aeda3       storage-provisioner
	235de006a8779       df0860106674d                                                                                         11 minutes ago      Running             kube-proxy                2                   48fda113a91e1       kube-proxy-pf9r9
	3ab67dbd917d1       90550c43ad2bc                                                                                         11 minutes ago      Running             kube-apiserver            0                   aa753dce76f58       kube-apiserver-functional-618103
	35544b6f97ec2       a0af72f2ec6d6                                                                                         11 minutes ago      Running             kube-controller-manager   2                   2bc0ac715e665       kube-controller-manager-functional-618103
	2b6dc397a1c84       46169d968e920                                                                                         11 minutes ago      Running             kube-scheduler            3                   ee9b228f455e5       kube-scheduler-functional-618103
	a17fd14faa229       5f1f5298c888d                                                                                         11 minutes ago      Running             etcd                      2                   bcc4a4a7e9ec6       etcd-functional-618103
	983efcc4c2219       46169d968e920                                                                                         11 minutes ago      Exited              kube-scheduler            2                   fb0acbcbb6fde       kube-scheduler-functional-618103
	931219062fe5d       6e38f40d628db                                                                                         12 minutes ago      Exited              storage-provisioner       2                   71076a317d56c       storage-provisioner
	6fbe46e6db643       df0860106674d                                                                                         12 minutes ago      Exited              kube-proxy                1                   e383d1a41071d       kube-proxy-pf9r9
	d3e2802fbfa24       a0af72f2ec6d6                                                                                         12 minutes ago      Exited              kube-controller-manager   1                   f9e4d86ab1fb9       kube-controller-manager-functional-618103
	488415e49873e       52546a367cc9e                                                                                         12 minutes ago      Exited              coredns                   1                   33d7cef7963e2       coredns-66bc5c9577-6k65s
	8881aa79fb35e       5f1f5298c888d                                                                                         12 minutes ago      Exited              etcd                      1                   7f1fdbbb616db       etcd-functional-618103
	
	
	==> coredns [488415e49873] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34736 - 5136 "HINFO IN 4907265284620355639.6271624726996692906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.436648483s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
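	
	The "forbidden" list errors above occurred while this CoreDNS instance raced the restarting apiserver; once RBAC was being served they cleared and the server came up on :53. CoreDNS's list/watch access normally comes from the system:coredns ClusterRole, which can be checked directly (a sketch, assuming the kubectl context from this run):
	
	  kubectl --context functional-618103 get clusterrole system:coredns -o yaml
	  kubectl --context functional-618103 auth can-i list services --as=system:serviceaccount:kube-system:coredns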
	
	
	==> coredns [4b943b553941] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51912 - 9260 "HINFO IN 4297598984794218073.7043669035780725794. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02177634s
	
	
	==> describe nodes <==
	Name:               functional-618103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-618103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-618103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_44_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:44:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-618103
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-618103
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 a01939c13db74c7a972b5645dc883cba
	  System UUID:                11f06017-3556-4c19-9ebb-d79a2382242d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fzr2x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-w9ff5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-5bb876957f-vxwl9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     11m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-6k65s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-functional-618103                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-618103              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-618103     200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pf9r9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-618103              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-cjqdd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fhn94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-618103 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-618103 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-618103 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	  Warning  ContainerGCFailed        11m (x2 over 12m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-618103 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-618103 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-618103 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
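	
	The ContainerGCFailed warning in the events ("Cannot connect to the Docker daemon") is consistent with the window in which dockerd was restarted between test phases. A quick way to confirm the daemon recovered on the node (a sketch, assuming the kicbase image's systemd units):
	
	  minikube -p functional-618103 ssh -- sudo systemctl is-active docker
	  minikube -p functional-618103 ssh -- sudo journalctl -u docker -n 20 --no-pager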
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[Sep26 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 1b 32 9d 1a 30 08 06
	[  +0.000481] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.000612] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[Sep26 22:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e b1 c7 4a e2 6c 08 06
	[Sep26 22:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea bc 84 e9 6e c4 08 06
	[Sep26 22:47] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d7 28 52 72 da 08 06
	
	
	==> etcd [8881aa79fb35] <==
	{"level":"warn","ts":"2025-09-26T22:46:11.662201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.668830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.674812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.690605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.699133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.705381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.752971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:58.674439Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:46:58.674530Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-618103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:46:58.674631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:47:05.676133Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:47:05.676235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.676247Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:47:05.676333Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:47:05.676346Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676357Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676397Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676406Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:47:05.676408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:47:05.676420Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.679143Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:47:05.679207Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.679232Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:47:05.679238Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-618103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a17fd14faa22] <==
	{"level":"warn","ts":"2025-09-26T22:47:15.280370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.290315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.296457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.303961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.312495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.318859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.324819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.331796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.338660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.345818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.353890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.361233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.367442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.389077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.396638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.402935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.410587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.418655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.432386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.439253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.445958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.502995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:57:14.980000Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1268}
	{"level":"info","ts":"2025-09-26T22:57:14.999136Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1268,"took":"18.767504ms","hash":2740310814,"current-db-size-bytes":3948544,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2093056,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-26T22:57:14.999183Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2740310814,"revision":1268,"compact-revision":-1}
	
	
	==> kernel <==
	 22:58:44 up  4:41,  0 users,  load average: 0.09, 0.36, 0.90
	Linux functional-618103 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3ab67dbd917d] <==
	E0926 22:47:55.564718       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37350: use of closed network connection
	E0926 22:47:56.728343       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37366: use of closed network connection
	E0926 22:47:58.376463       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37390: use of closed network connection
	I0926 22:47:58.506130       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.213.113"}
	I0926 22:48:15.343140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:36.122427       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:28.174004       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:46.447396       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:53.418177       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:10.150540       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:08.395336       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:28.260095       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:27.812871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:43.903184       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:53:43.987321       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.146.163"}
	I0926 22:53:44.007330       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.253.246"}
	I0926 22:53:56.112344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:54:35.984530       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:55:09.285049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:55:54.680603       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:56:31.775105       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:57:15.885868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:57:23.024709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:58:00.128185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:58:25.478507       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [35544b6f97ec] <==
	I0926 22:47:19.269262       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:47:19.269288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:47:19.269310       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:47:19.269345       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:47:19.269354       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:47:19.269355       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:47:19.269367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:47:19.269408       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0926 22:47:19.269458       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:47:19.270616       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:47:19.270654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:47:19.270705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:47:19.274641       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0926 22:47:19.275930       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:47:19.275977       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:47:19.277032       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:47:19.278282       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:47:19.282523       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0926 22:47:19.290777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:53:43.946336       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.949638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.949739       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.952844       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.955073       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.958713       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
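The burst of replica_set.go errors at 22:53:43 is a startup race rather than data loss: the dashboard ReplicaSets were synced before their ServiceAccount existed, and pod creation is forbidden until it does. A quick check that the account eventually appeared (a sketch, run against the same context):

	kubectl --context functional-618103 -n kubernetes-dashboard \
	  get serviceaccount kubernetes-dashboard
	# Once this returns, the controller's retries succeed and the
	# ReplicaSets can create their pods.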
	
	==> kube-controller-manager [d3e2802fbfa2] <==
	I0926 22:46:19.234504       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:46:19.236815       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:46:19.237970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:46:19.238038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:46:19.238071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:46:19.244342       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:46:19.247607       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0926 22:46:19.257028       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:46:19.257138       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0926 22:46:19.257161       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:46:19.257187       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:19.257202       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:46:19.257218       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:46:19.257356       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:46:19.257620       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:46:19.257635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:46:19.257650       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:46:19.257654       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0926 22:46:19.257809       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:46:19.257998       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-618103"
	I0926 22:46:19.258105       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:46:19.259596       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0926 22:46:19.263705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:46:19.263721       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:46:19.276803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [235de006a877] <==
	I0926 22:47:16.787337       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:47:16.840579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:47:16.941472       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:47:16.941535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:47:16.941686       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:47:16.967763       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:47:16.967831       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:47:16.974095       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:47:16.974545       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:47:16.974585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:47:16.976066       1 config.go:200] "Starting service config controller"
	I0926 22:47:16.976208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:47:16.976069       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:47:16.976066       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:47:16.976284       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:47:16.976354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:47:16.976214       1 config.go:309] "Starting node config controller"
	I0926 22:47:16.976844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:47:16.976855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:47:17.076460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:47:17.076543       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:47:17.077883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
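The "configuration may be incomplete" error is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort traffic on every local IP. A minimal sketch of the remedy the warning itself suggests (the minikube --extra-config mapping is an assumption, not something this run verifies):

	# Flag form, as suggested in the log line above:
	kube-proxy --nodeport-addresses primary
	# Presumed minikube equivalent via component extra-config:
	minikube start --extra-config=kube-proxy.nodePortAddresses=primary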
	
	==> kube-proxy [6fbe46e6db64] <==
	I0926 22:46:24.477902       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:46:24.540344       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:46:24.641416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:46:24.641463       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:46:24.641586       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:46:24.663582       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:46:24.663633       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:46:24.668965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:46:24.669437       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:46:24.669458       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:24.671249       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:46:24.671283       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:46:24.671340       1 config.go:200] "Starting service config controller"
	I0926 22:46:24.671555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:46:24.671390       1 config.go:309] "Starting node config controller"
	I0926 22:46:24.671594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:46:24.671602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:46:24.671646       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:46:24.671719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:46:24.771534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:46:24.772269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:46:24.772259       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2b6dc397a1c8] <==
	I0926 22:47:14.817040       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:47:15.880591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:47:15.880632       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:47:15.880643       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:47:15.880656       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:47:15.894307       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:47:15.894494       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:47:15.896237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:15.896273       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:15.896546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:47:15.896606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:47:15.997423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
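The requestheader_controller warning above carries its own fix. Because the scheduler authenticates as the user system:kube-scheduler rather than as a ServiceAccount, the concrete form of the suggested command would be along these lines (the rolebinding name is illustrative):

	kubectl -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler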
	
	==> kube-scheduler [983efcc4c221] <==
	I0926 22:47:11.056121       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:47:11.419127       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W0926 22:47:11.419160       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:47:11.419168       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:47:11.426056       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:47:11.426079       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0926 22:47:11.426098       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0926 22:47:11.427765       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.427800       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0926 22:47:11.428164       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I0926 22:47:11.428464       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:47:11.428566       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0926 22:47:11.428505       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428606       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428772       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:47:11.428793       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:47:11.428828       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:47:11.428851       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 26 22:58:12 functional-618103 kubelet[8543]: E0926 22:58:12.203033    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:58:13 functional-618103 kubelet[8543]: E0926 22:58:13.203157    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:58:14 functional-618103 kubelet[8543]: E0926 22:58:14.202623    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:58:14 functional-618103 kubelet[8543]: E0926 22:58:14.204440    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:58:20 functional-618103 kubelet[8543]: E0926 22:58:20.205081    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:58:23 functional-618103 kubelet[8543]: E0926 22:58:23.203742    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:58:25 functional-618103 kubelet[8543]: E0926 22:58:25.203116    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:58:25 functional-618103 kubelet[8543]: E0926 22:58:25.204933    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:58:27 functional-618103 kubelet[8543]: E0926 22:58:27.203261    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:58:27 functional-618103 kubelet[8543]: E0926 22:58:27.205181    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:58:34 functional-618103 kubelet[8543]: E0926 22:58:34.205263    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:58:35 functional-618103 kubelet[8543]: E0926 22:58:35.387528    8543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 26 22:58:35 functional-618103 kubelet[8543]: E0926 22:58:35.387586    8543 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 26 22:58:35 functional-618103 kubelet[8543]: E0926 22:58:35.387689    8543 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(f724a102-f6d8-4b2f-81d3-f320399fc9ec): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:58:35 functional-618103 kubelet[8543]: E0926 22:58:35.387736    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:58:38 functional-618103 kubelet[8543]: E0926 22:58:38.202789    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:58:39 functional-618103 kubelet[8543]: E0926 22:58:39.295117    8543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:58:39 functional-618103 kubelet[8543]: E0926 22:58:39.295171    8543 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:58:39 functional-618103 kubelet[8543]: E0926 22:58:39.295292    8543 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(b75937ed-7333-4506-a767-ccd53069b9d4): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:58:39 functional-618103 kubelet[8543]: E0926 22:58:39.295331    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:58:40 functional-618103 kubelet[8543]: E0926 22:58:40.301839    8543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 26 22:58:40 functional-618103 kubelet[8543]: E0926 22:58:40.301900    8543 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 26 22:58:40 functional-618103 kubelet[8543]: E0926 22:58:40.302004    8543 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-w9ff5_default(ae581ca3-c736-4362-b682-e2a0f6c6732e): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:58:40 functional-618103 kubelet[8543]: E0926 22:58:40.302045    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:58:42 functional-618103 kubelet[8543]: E0926 22:58:42.204378    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	
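Every kubelet error above shares one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests). A sketch of the standard workaround, routing pulls through registry credentials; the secret name is illustrative and valid Docker Hub credentials are assumed:

	kubectl create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	# Attach the secret to the default ServiceAccount so unmodified
	# pod specs pick it up automatically:
	kubectl patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'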
	
	==> storage-provisioner [931219062fe5] <==
	W0926 22:46:33.053707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:36.651697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:39.705759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.727822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.732935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:42.733086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:46:42.733180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed6505a3-eb20-46af-b1ab-54226975775d", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0 became leader
	I0926 22:46:42.733254       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0!
	W0926 22:46:42.735190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.739871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:42.833591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0!
	W0926 22:46:44.743050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:44.747089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:46.750572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:46.755678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:48.758889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:48.762954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:50.766338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:50.770297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:52.772964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:52.776945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:54.780028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:54.783787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:56.787442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:56.791219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
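The wall of warnings in this block comes from the provisioner's leader-election loop, which polls a v1 Endpoints object every couple of seconds; each request trips the deprecation warning. The lock object it contends on is named in the LeaderElection event above and can be inspected directly:

	kubectl --context functional-618103 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml
	# Silencing the warnings would require the provisioner to move its
	# lock to a coordination.k8s.io/v1 Lease, an upstream change rather
	# than a cluster setting.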
	
	==> storage-provisioner [ad0540cd441b] <==
	W0926 22:58:18.587385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:20.590646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:20.594595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:22.597463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:22.602674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:24.605679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:24.609782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:26.612750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:26.617122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:28.620268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:28.624606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:30.627672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:30.633027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:32.635899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:32.639824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:34.642881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:34.648006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:36.651654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:36.655414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:38.658618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:38.663426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:40.666709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:40.670710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:42.673397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:58:42.677610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
helpers_test.go:269: (dbg) Run:  kubectl --context functional-618103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94: exit status 1 (93.680333ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:53:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b70539961c432e23ea0ea9d2f2ceca8cde5a3580e7d978b3be8d3ade4c23bee8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:53:34 +0000
	      Finished:     Fri, 26 Sep 2025 22:53:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdb5q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zdb5q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-618103
	  Normal  Pulling    5m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.484s (1.484s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fzr2x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwg9w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lwg9w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fzr2x to functional-618103
	  Normal   Pulling    7m43s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m43s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m43s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    42s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     42s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w9ff5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zhs6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7zhs6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w9ff5 to functional-618103
	  Warning  Failed     9m23s (x3 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m2s (x5 over 11m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m2s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     8m2s (x2 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    58s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     43s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfhnj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cfhnj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/nginx-svc to functional-618103
	  Warning  Failed     10m                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m53s (x5 over 11m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m53s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m53s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    60s (x43 over 10m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     48s (x44 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jbh5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jbh5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-618103
	  Normal   Pulling    8m13s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     8m13s (x5 over 10m)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m13s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    46s (x42 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     46s (x42 over 10m)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-cjqdd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fhn94" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94: exit status 1
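
The exit status 1 reflects kubectl's lookup rules rather than a cluster failure: the two dashboard pods live in the kubernetes-dashboard namespace, and describe without -n searches only default, so they come back NotFound even though kubelet was still syncing them moments earlier; kubectl exits non-zero whenever any named object is missing, while still printing the pods it does find. A sketch of the namespaced form:

	kubectl --context functional-618103 -n kubernetes-dashboard describe pod \
	  dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94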
--- FAIL: TestFunctional/parallel/DashboardCmd (301.93s)

x
+
TestFunctional/parallel/ServiceCmdConnect (602.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-618103 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-618103 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w9ff5" [ae581ca3-c736-4362-b682-e2a0f6c6732e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-26 22:57:43.840011249 +0000 UTC m=+1733.795994192
functional_test.go:1645: (dbg) Run:  kubectl --context functional-618103 describe po hello-node-connect-7d85dfc575-w9ff5 -n default
functional_test.go:1645: (dbg) kubectl --context functional-618103 describe po hello-node-connect-7d85dfc575-w9ff5 -n default:
Name:             hello-node-connect-7d85dfc575-w9ff5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-618103/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:47:43 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zhs6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7zhs6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w9ff5 to functional-618103
  Warning  Failed     8m22s (x3 over 9m54s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m1s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 9m54s)    kubelet            Error: ErrImagePull
  Warning  Failed     7m1s (x2 over 9m9s)     kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m54s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-618103 logs hello-node-connect-7d85dfc575-w9ff5 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-618103 logs hello-node-connect-7d85dfc575-w9ff5 -n default: exit status 1 (73.639676ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w9ff5" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-618103 logs hello-node-connect-7d85dfc575-w9ff5 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-618103 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-w9ff5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-618103/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:47:43 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zhs6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7zhs6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w9ff5 to functional-618103
  Warning  Failed     8m23s (x3 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m55s)    kubelet            Error: ErrImagePull
  Warning  Failed     7m2s (x2 over 9m10s)    kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m55s (x20 over 9m55s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m41s (x21 over 9m55s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-618103 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-618103 logs -l app=hello-node-connect: exit status 1 (63.61016ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w9ff5" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-618103 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
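The logs are empty by necessity: kubectl logs returns BadRequest for a container that is still waiting to start. One way to read the kubelet's waiting reason directly (a sketch reusing the test's context and label selector) is:

  kubectl --context functional-618103 get pod -l app=hello-node-connect \
    -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}'
  # expected while stuck: ImagePullBackOff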
functional_test.go:1624: (dbg) Run:  kubectl --context functional-618103 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.224.140
IPs:                      10.102.224.140
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31889/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
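Note the empty Endpoints line: the selector matches the pod, but a pod that is never Ready is never added to the service, so NodePort 31889 has no backend to forward to. A quick cross-check, assuming the same context (illustrative, not captured from this run):

  kubectl --context functional-618103 get endpoints hello-node-connect
  # the ENDPOINTS column stays empty (or <none>) while no matching pod is Ready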
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-618103
helpers_test.go:243: (dbg) docker inspect functional-618103:

-- stdout --
	[
	    {
	        "Id": "40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3",
	        "Created": "2025-09-26T22:44:39.177529673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1446213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:44:39.216160085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/hosts",
	        "LogPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3-json.log",
	        "Name": "/functional-618103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-618103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-618103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3",
	                "LowerDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-618103",
	                "Source": "/var/lib/docker/volumes/functional-618103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-618103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-618103",
	                "name.minikube.sigs.k8s.io": "functional-618103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6d4e1c4445c59fbebd8bb1273f4210abdbd4271047b51cd8d41d8ebd4919a5e",
	            "SandboxKey": "/var/run/docker/netns/e6d4e1c4445c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-618103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3f:38:e2:60:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47b79b027c2f31af98f68c030f481ae4c06be4c4ce5c8d33e9a1bc7acdb3fb49",
	                    "EndpointID": "32a9b59825347209e2fa44185c54fe4e9a63f24b54509e66744fdc9b9662afb5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-618103",
	                        "40fba9eb93d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
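The inspect dump shows the kicbase container itself is fine: State.Status is "running" and the apiserver port 8441/tcp is published on 127.0.0.1:33894, so the failure is inside the cluster, not in the node container. Rather than scanning the full JSON, the same fields can be pulled with Go templates (a sketch against the container name above; index is needed because the network name contains a hyphen):

  docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-618103").IPAddress}}' functional-618103
  # running 192.168.49.2
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-618103
  # 33894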
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-618103 -n functional-618103
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-618103 ssh -- ls -la /mount-9p                                                                          │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh sudo umount -f /mount-9p                                                                     │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ mount          │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount2 --alsologtostderr -v=1 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ mount          │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount1 --alsologtostderr -v=1 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ mount          │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount3 --alsologtostderr -v=1 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh            │ functional-618103 ssh findmnt -T /mount1                                                                           │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh            │ functional-618103 ssh findmnt -T /mount1                                                                           │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh findmnt -T /mount2                                                                           │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh findmnt -T /mount3                                                                           │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ mount          │ -p functional-618103 --kill=true                                                                                   │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker        │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker        │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-618103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker                  │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-618103 --alsologtostderr -v=1                                                     │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ license        │                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-618103 update-context --alsologtostderr -v=2                                                            │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-618103 update-context --alsologtostderr -v=2                                                            │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-618103 update-context --alsologtostderr -v=2                                                            │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format short --alsologtostderr                                                        │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format yaml --alsologtostderr                                                         │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-618103 ssh pgrep buildkitd                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ image          │ functional-618103 image build -t localhost/my-image:functional-618103 testdata/build --alsologtostderr             │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls                                                                                         │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format json --alsologtostderr                                                         │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-618103 image ls --format table --alsologtostderr                                                        │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:53:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:53:42.860082 1470331 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:42.860378 1470331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.860389 1470331 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:42.860394 1470331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.860610 1470331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:53:42.861084 1470331 out.go:368] Setting JSON to false
	I0926 22:53:42.862233 1470331 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16567,"bootTime":1758910656,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:42.862297 1470331 start.go:140] virtualization: kvm guest
	I0926 22:53:42.864204 1470331 out.go:179] * [functional-618103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:42.865433 1470331 notify.go:220] Checking for updates...
	I0926 22:53:42.865460 1470331 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:42.866821 1470331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:42.868122 1470331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:53:42.869399 1470331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:53:42.870466 1470331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:42.871525 1470331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:42.873004 1470331 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:53:42.873467 1470331 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:42.897179 1470331 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:53:42.897271 1470331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:42.955533 1470331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:42.944538569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:42.955662 1470331 docker.go:318] overlay module found
	I0926 22:53:42.957493 1470331 out.go:179] * Using the docker driver based on existing profile
	I0926 22:53:42.958658 1470331 start.go:304] selected driver: docker
	I0926 22:53:42.958672 1470331 start.go:924] validating driver "docker" against &{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:42.958752 1470331 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:42.958844 1470331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:43.011560 1470331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:43.002442894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:43.012166 1470331 cni.go:84] Creating CNI manager for ""
	I0926 22:53:43.012228 1470331 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:53:43.012283 1470331 start.go:348] cluster config:
	{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:43.014130 1470331 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 26 22:53:36 functional-618103 dockerd[6907]: time="2025-09-26T22:53:36.276677245Z" level=info msg="ignoring event" container=ecbf501ce8d99633476b94a501b15709ce1e05f42744847ae5b074277ecc8134 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:53:44 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e6015c4dcc62a36f1fa7a577aff6cd93cca4657d956e07d4c099ac15984ea206/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:53:44 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/554145ac2092170655e1ec6d97caa1dbb9632fe0c0019acde1beb4875d3b901f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.507016262Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.534888630Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.552473025Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.577382631Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:45 functional-618103 dockerd[6907]: time="2025-09-26T22:53:45.290018725Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:56 functional-618103 dockerd[6907]: time="2025-09-26T22:53:56.219133133Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:56 functional-618103 dockerd[6907]: time="2025-09-26T22:53:56.248363539Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:58 functional-618103 dockerd[6907]: time="2025-09-26T22:53:58.217156064Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:53:58 functional-618103 dockerd[6907]: time="2025-09-26T22:53:58.245543449Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:54:25 functional-618103 dockerd[6907]: time="2025-09-26T22:54:25.222693657Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:54:25 functional-618103 dockerd[6907]: time="2025-09-26T22:54:25.253441326Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:54:26 functional-618103 dockerd[6907]: time="2025-09-26T22:54:26.219774042Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:54:26 functional-618103 dockerd[6907]: time="2025-09-26T22:54:26.250590001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:55:15 functional-618103 dockerd[6907]: time="2025-09-26T22:55:15.220683095Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:55:15 functional-618103 dockerd[6907]: time="2025-09-26T22:55:15.316199586Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:55:15 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:55:15Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Sep 26 22:55:18 functional-618103 dockerd[6907]: time="2025-09-26T22:55:18.220299141Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:55:18 functional-618103 dockerd[6907]: time="2025-09-26T22:55:18.244343669Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:56:42 functional-618103 dockerd[6907]: time="2025-09-26T22:56:42.224380428Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:56:42 functional-618103 dockerd[6907]: time="2025-09-26T22:56:42.259081089Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:56:50 functional-618103 dockerd[6907]: time="2025-09-26T22:56:50.218255322Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:56:50 functional-618103 dockerd[6907]: time="2025-09-26T22:56:50.248224667Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b70539961c432       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   ecbf501ce8d99       busybox-mount
	05018b68e7ad4       mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                         9 minutes ago       Running             mysql                     0                   dae83eded4d5b       mysql-5bb876957f-vxwl9
	4b943b553941a       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   2                   3bc210e6c6ac3       coredns-66bc5c9577-6k65s
	ad0540cd441bd       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   d3ee80c4aeda3       storage-provisioner
	235de006a8779       df0860106674d                                                                                         10 minutes ago      Running             kube-proxy                2                   48fda113a91e1       kube-proxy-pf9r9
	3ab67dbd917d1       90550c43ad2bc                                                                                         10 minutes ago      Running             kube-apiserver            0                   aa753dce76f58       kube-apiserver-functional-618103
	35544b6f97ec2       a0af72f2ec6d6                                                                                         10 minutes ago      Running             kube-controller-manager   2                   2bc0ac715e665       kube-controller-manager-functional-618103
	2b6dc397a1c84       46169d968e920                                                                                         10 minutes ago      Running             kube-scheduler            3                   ee9b228f455e5       kube-scheduler-functional-618103
	a17fd14faa229       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      2                   bcc4a4a7e9ec6       etcd-functional-618103
	983efcc4c2219       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            2                   fb0acbcbb6fde       kube-scheduler-functional-618103
	931219062fe5d       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   71076a317d56c       storage-provisioner
	6fbe46e6db643       df0860106674d                                                                                         11 minutes ago      Exited              kube-proxy                1                   e383d1a41071d       kube-proxy-pf9r9
	d3e2802fbfa24       a0af72f2ec6d6                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   f9e4d86ab1fb9       kube-controller-manager-functional-618103
	488415e49873e       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   1                   33d7cef7963e2       coredns-66bc5c9577-6k65s
	8881aa79fb35e       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      1                   7f1fdbbb616db       etcd-functional-618103
	
	
	==> coredns [488415e49873] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34736 - 5136 "HINFO IN 4907265284620355639.6271624726996692906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.436648483s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4b943b553941] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51912 - 9260 "HINFO IN 4297598984794218073.7043669035780725794. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02177634s
	
	
	==> describe nodes <==
	Name:               functional-618103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-618103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-618103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_44_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:44:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-618103
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:57:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:54:15 +0000   Fri, 26 Sep 2025 22:44:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-618103
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 a01939c13db74c7a972b5645dc883cba
	  System UUID:                11f06017-3556-4c19-9ebb-d79a2382242d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fzr2x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-w9ff5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-vxwl9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 coredns-66bc5c9577-6k65s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-618103                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-618103              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-618103     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pf9r9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-618103              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-cjqdd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fhn94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-618103 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-618103 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-618103 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	  Warning  ContainerGCFailed        10m (x2 over 11m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-618103 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-618103 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-618103 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[Sep26 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 1b 32 9d 1a 30 08 06
	[  +0.000481] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.000612] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[Sep26 22:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e b1 c7 4a e2 6c 08 06
	[Sep26 22:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea bc 84 e9 6e c4 08 06
	[Sep26 22:47] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d7 28 52 72 da 08 06
	
	
	==> etcd [8881aa79fb35] <==
	{"level":"warn","ts":"2025-09-26T22:46:11.662201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.668830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.674812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.690605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.699133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.705381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.752971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:58.674439Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:46:58.674530Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-618103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:46:58.674631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:47:05.676133Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:47:05.676235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.676247Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:47:05.676333Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:47:05.676346Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676357Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676397Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676406Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:47:05.676408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:47:05.676420Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.679143Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:47:05.679207Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.679232Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:47:05.679238Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-618103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a17fd14faa22] <==
	{"level":"warn","ts":"2025-09-26T22:47:15.280370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.290315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.296457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.303961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.312495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.318859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.324819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.331796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.338660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.345818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.353890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.361233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.367442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.389077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.396638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.402935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.410587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.418655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.432386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.439253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.445958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.502995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:57:14.980000Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1268}
	{"level":"info","ts":"2025-09-26T22:57:14.999136Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1268,"took":"18.767504ms","hash":2740310814,"current-db-size-bytes":3948544,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2093056,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-26T22:57:14.999183Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2740310814,"revision":1268,"compact-revision":-1}
	
	
	==> kernel <==
	 22:57:45 up  4:40,  0 users,  load average: 0.11, 0.41, 0.95
	Linux functional-618103 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3ab67dbd917d] <==
	I0926 22:47:41.093036       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.151.149"}
	I0926 22:47:43.522048       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.224.140"}
	E0926 22:47:55.564718       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37350: use of closed network connection
	E0926 22:47:56.728343       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37366: use of closed network connection
	E0926 22:47:58.376463       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37390: use of closed network connection
	I0926 22:47:58.506130       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.213.113"}
	I0926 22:48:15.343140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:36.122427       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:28.174004       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:46.447396       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:53.418177       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:10.150540       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:08.395336       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:28.260095       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:27.812871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:43.903184       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:53:43.987321       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.146.163"}
	I0926 22:53:44.007330       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.253.246"}
	I0926 22:53:56.112344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:54:35.984530       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:55:09.285049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:55:54.680603       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:56:31.775105       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:57:15.885868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:57:23.024709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
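	
	Nothing in the apiserver block is fatal: the recurring "Error getting keys" storage-stats message is emitted at info severity (note the I0926 prefix) despite its wording, and the remaining lines record ClusterIP allocations for the services the tests create. Those allocations can be cross-checked against live state:
	
	$ kubectl --context functional-618103 get svc -A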
	
	
	==> kube-controller-manager [35544b6f97ec] <==
	I0926 22:47:19.269262       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:47:19.269288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:47:19.269310       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:47:19.269345       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:47:19.269354       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:47:19.269355       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:47:19.269367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:47:19.269408       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0926 22:47:19.269458       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:47:19.270616       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:47:19.270654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:47:19.270705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:47:19.274641       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0926 22:47:19.275930       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:47:19.275977       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:47:19.277032       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:47:19.278282       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:47:19.282523       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0926 22:47:19.290777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:53:43.946336       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.949638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.949739       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.952844       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.955073       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.958713       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [d3e2802fbfa2] <==
	I0926 22:46:19.234504       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:46:19.236815       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:46:19.237970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:46:19.238038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:46:19.238071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:46:19.244342       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:46:19.247607       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0926 22:46:19.257028       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:46:19.257138       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0926 22:46:19.257161       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:46:19.257187       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:19.257202       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:46:19.257218       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:46:19.257356       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:46:19.257620       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:46:19.257635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:46:19.257650       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:46:19.257654       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0926 22:46:19.257809       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:46:19.257998       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-618103"
	I0926 22:46:19.258105       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:46:19.259596       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0926 22:46:19.263705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:46:19.263721       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:46:19.276803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [235de006a877] <==
	I0926 22:47:16.787337       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:47:16.840579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:47:16.941472       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:47:16.941535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:47:16.941686       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:47:16.967763       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:47:16.967831       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:47:16.974095       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:47:16.974545       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:47:16.974585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:47:16.976066       1 config.go:200] "Starting service config controller"
	I0926 22:47:16.976208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:47:16.976069       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:47:16.976066       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:47:16.976284       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:47:16.976354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:47:16.976214       1 config.go:309] "Starting node config controller"
	I0926 22:47:16.976844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:47:16.976855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:47:17.076460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:47:17.076543       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:47:17.077883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [6fbe46e6db64] <==
	I0926 22:46:24.477902       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:46:24.540344       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:46:24.641416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:46:24.641463       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:46:24.641586       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:46:24.663582       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:46:24.663633       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:46:24.668965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:46:24.669437       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:46:24.669458       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:24.671249       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:46:24.671283       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:46:24.671340       1 config.go:200] "Starting service config controller"
	I0926 22:46:24.671555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:46:24.671390       1 config.go:309] "Starting node config controller"
	I0926 22:46:24.671594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:46:24.671602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:46:24.671646       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:46:24.671719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:46:24.771534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:46:24.772269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:46:24.772259       1 shared_informer.go:356] "Caches are synced" controller="service config"
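	
	Both kube-proxy instances emit the same advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. If the narrower behaviour were wanted, the setting lives in the kube-proxy ConfigMap on a kubeadm-style cluster like this one; a sketch of where it would go:
	
	$ kubectl --context functional-618103 -n kube-system get configmap kube-proxy -o yaml
	  # in the embedded config.conf, set:  nodePortAddresses: ["primary"]
	  # then delete the kube-proxy pods so the DaemonSet restarts them with the new config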
	
	
	==> kube-scheduler [2b6dc397a1c8] <==
	I0926 22:47:14.817040       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:47:15.880591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:47:15.880632       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:47:15.880643       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:47:15.880656       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:47:15.894307       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:47:15.894494       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:47:15.896237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:15.896273       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:15.896546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:47:15.896606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:47:15.997423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [983efcc4c221] <==
	I0926 22:47:11.056121       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:47:11.419127       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W0926 22:47:11.419160       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:47:11.419168       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:47:11.426056       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:47:11.426079       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0926 22:47:11.426098       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0926 22:47:11.427765       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.427800       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0926 22:47:11.428164       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I0926 22:47:11.428464       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:47:11.428566       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0926 22:47:11.428505       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428606       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428772       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:47:11.428793       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:47:11.428828       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:47:11.428851       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 26 22:56:56 functional-618103 kubelet[8543]: E0926 22:56:56.204897    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:56:56 functional-618103 kubelet[8543]: E0926 22:56:56.204915    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:57:01 functional-618103 kubelet[8543]: E0926 22:57:01.205166    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:57:02 functional-618103 kubelet[8543]: E0926 22:57:02.202609    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:57:04 functional-618103 kubelet[8543]: E0926 22:57:04.202558    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:57:04 functional-618103 kubelet[8543]: E0926 22:57:04.202589    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:57:07 functional-618103 kubelet[8543]: E0926 22:57:07.205017    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:57:09 functional-618103 kubelet[8543]: E0926 22:57:09.205089    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:57:12 functional-618103 kubelet[8543]: E0926 22:57:12.205234    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:57:14 functional-618103 kubelet[8543]: E0926 22:57:14.203361    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:57:17 functional-618103 kubelet[8543]: E0926 22:57:17.202918    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:57:18 functional-618103 kubelet[8543]: E0926 22:57:18.203248    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:57:20 functional-618103 kubelet[8543]: E0926 22:57:20.204536    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:57:20 functional-618103 kubelet[8543]: E0926 22:57:20.204629    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:57:25 functional-618103 kubelet[8543]: E0926 22:57:25.204221    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:57:27 functional-618103 kubelet[8543]: E0926 22:57:27.203140    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:57:31 functional-618103 kubelet[8543]: E0926 22:57:31.203215    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:57:31 functional-618103 kubelet[8543]: E0926 22:57:31.205345    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:57:32 functional-618103 kubelet[8543]: E0926 22:57:32.203140    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:57:32 functional-618103 kubelet[8543]: E0926 22:57:32.205006    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:57:36 functional-618103 kubelet[8543]: E0926 22:57:36.204588    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:57:38 functional-618103 kubelet[8543]: E0926 22:57:38.202990    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:57:42 functional-618103 kubelet[8543]: E0926 22:57:42.205179    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:57:44 functional-618103 kubelet[8543]: E0926 22:57:44.204858    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	Sep 26 22:57:45 functional-618103 kubelet[8543]: E0926 22:57:45.202827    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
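	
	Every kubelet error in this block shares one root cause: unauthenticated Docker Hub pulls hitting the toomanyrequests rate limit, which leaves all of the test workloads stuck in ImagePullBackOff. Two common workarounds, sketched with an image name taken from the events above:
	
	$ minikube -p functional-618103 image load docker.io/nginx:alpine   # side-load the image instead of pulling
	$ minikube -p functional-618103 ssh -- sudo docker login            # or authenticate so pulls count against an account's quota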
	
	
	==> storage-provisioner [931219062fe5] <==
	W0926 22:46:33.053707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:36.651697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:39.705759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.727822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.732935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:42.733086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:46:42.733180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed6505a3-eb20-46af-b1ab-54226975775d", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0 became leader
	I0926 22:46:42.733254       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0!
	W0926 22:46:42.735190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.739871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:42.833591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0!
	W0926 22:46:44.743050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:44.747089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:46.750572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:46.755678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:48.758889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:48.762954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:50.766338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:50.770297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:52.772964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:52.776945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:54.780028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:54.783787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:56.787442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:56.791219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ad0540cd441b] <==
	W0926 22:57:20.367223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:22.370132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:22.374410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:24.378270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:24.383619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:26.387010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:26.390940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:28.394346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:28.398440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:30.401654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:30.406428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:32.409523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:32.414514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:34.417530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:34.421801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:36.424739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:36.428760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:38.432987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:38.436879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:40.439807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:40.443703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:42.446616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:42.451336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:44.454302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:57:44.458712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
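	
	These warnings repeat every couple of seconds because the provisioner's leader election still renews its lock through the deprecated v1 Endpoints API (the kube-system/k8s.io-minikube-hostpath record visible in the other instance's log); they are client-side noise, not errors. The replacement resource can be inspected directly:
	
	$ kubectl --context functional-618103 -n kube-system get endpointslices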
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
helpers_test.go:269: (dbg) Run:  kubectl --context functional-618103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94: exit status 1 (95.707863ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:53:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b70539961c432e23ea0ea9d2f2ceca8cde5a3580e7d978b3be8d3ade4c23bee8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:53:34 +0000
	      Finished:     Fri, 26 Sep 2025 22:53:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdb5q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zdb5q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-618103
	  Normal  Pulling    4m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.484s (1.484s including waiting). Image size: 4403845 bytes.
	  Normal  Created    4m11s  kubelet            Created container: mount-munger
	  Normal  Started    4m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fzr2x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwg9w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lwg9w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fzr2x to functional-618103
	  Normal   Pulling    6m44s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m44s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m44s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m28s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
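	
	The repeated failures above are Docker Hub's unauthenticated pull rate limit, not a defect in the pod spec. One common mitigation is to make the pulls authenticated; a minimal sketch, assuming Docker Hub credentials are at hand (the secret name regcred and the <user>/<token> values are placeholders):
	
	  kubectl --context functional-618103 create secret docker-registry regcred \
	    --docker-server=https://index.docker.io/v1/ \
	    --docker-username=<user> --docker-password=<token>
	  kubectl --context functional-618103 patch serviceaccount default \
	    -p '{"imagePullSecrets": [{"name": "regcred"}]}'
	
	Pods created afterwards under the default service account would then pull with the authenticated, higher limit.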
	
	
	Name:             hello-node-connect-7d85dfc575-w9ff5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zhs6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7zhs6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w9ff5 to functional-618103
	  Warning  Failed     8m24s (x3 over 9m56s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m3s (x5 over 9m56s)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m3s (x2 over 9m11s)    kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m56s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m42s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfhnj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cfhnj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-618103
	  Warning  Failed     9m56s                   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m54s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m54s (x5 over 9m56s)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m54s (x4 over 9m42s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m52s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x43 over 9m56s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
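	
	An alternative that sidesteps the registry entirely is to load the image into the node from the host; a sketch using minikube's image subcommand (assumes the image is already present, or still pullable, on the host):
	
	  docker pull docker.io/nginx:alpine
	  out/minikube-linux-amd64 -p functional-618103 image load docker.io/nginx:alpine
	
	With the image in the node's cache and the default IfNotPresent policy for a tagged image, the kubelet would not contact Docker Hub at all.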
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jbh5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jbh5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m59s                   default-scheduler  Successfully assigned default/sp-pod to functional-618103
	  Normal   Pulling    7m14s (x5 over 9m58s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m14s (x5 over 9m56s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m14s (x5 over 9m56s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m50s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x21 over 9m56s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-cjqdd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fhn94" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.61s)

TestFunctional/parallel/PersistentVolumeClaim (367.67s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a6887948-c51f-4af9-b292-65ef9462642c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005096029s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-618103 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-618103 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-618103 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-618103 apply -f testdata/storage-provisioner/pod.yaml
I0926 22:47:46.057977 1399974 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f724a102-f6d8-4b2f-81d3-f320399fc9ec] Pending
E0926 22:47:46.317610 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [f724a102-f6d8-4b2f-81d3-f320399fc9ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-26 22:53:46.363979385 +0000 UTC m=+1496.319962318
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-618103 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-618103 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-618103/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:47:46 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jbh5h (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-jbh5h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-618103
  Normal   Pulling    3m15s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m15s (x5 over 5m57s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m15s (x5 over 5m57s)  kubelet            Error: ErrImagePull
  Warning  Failed     51s (x20 over 5m57s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    37s (x21 over 5m57s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-618103 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-618103 logs sp-pod -n default: exit status 1 (61.207868ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-618103 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod: test=storage-provisioner within 6m0s: context deadline exceeded
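
For reference, the claim the failing pod mounts (myclaim, bound through the default storage class) has this general shape; a minimal sketch, not the actual contents of testdata/storage-provisioner/pvc.yaml:

kubectl --context functional-618103 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF

Note that the claim itself appears to have provisioned and bound; the test fails later only because the docker.io/nginx pull for the consuming pod is rate-limited.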
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-618103
helpers_test.go:243: (dbg) docker inspect functional-618103:

-- stdout --
	[
	    {
	        "Id": "40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3",
	        "Created": "2025-09-26T22:44:39.177529673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1446213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:44:39.216160085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/hosts",
	        "LogPath": "/var/lib/docker/containers/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3/40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3-json.log",
	        "Name": "/functional-618103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-618103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-618103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40fba9eb93d7f270a4f2e71131bb7f97ec802b9fc53da790d6896a05343e60d3",
	                "LowerDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a-init/diff:/var/lib/docker/overlay2/827bbee2845c10b8115687dac9c29e877014c7a0c40dad5ffa79d8df88591ec1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/645688aff77bf9052ce27f463c2a3ac192b16b83dc6e4ddfb66b81c19b29912a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-618103",
	                "Source": "/var/lib/docker/volumes/functional-618103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-618103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-618103",
	                "name.minikube.sigs.k8s.io": "functional-618103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6d4e1c4445c59fbebd8bb1273f4210abdbd4271047b51cd8d41d8ebd4919a5e",
	            "SandboxKey": "/var/run/docker/netns/e6d4e1c4445c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-618103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3f:38:e2:60:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47b79b027c2f31af98f68c030f481ae4c06be4c4ce5c8d33e9a1bc7acdb3fb49",
	                    "EndpointID": "32a9b59825347209e2fa44185c54fe4e9a63f24b54509e66744fdc9b9662afb5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-618103",
	                        "40fba9eb93d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
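
Rather than scanning the full inspect document, Go templates can extract just the fields a post-mortem needs; a small sketch against the same container:

  docker inspect -f '{{ .State.Status }} pid={{ .State.Pid }}' functional-618103
  docker inspect -f '{{ (index .NetworkSettings.Networks "functional-618103").IPAddress }}' functional-618103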
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-618103 -n functional-618103
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 logs -n 25: (1.001304015s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdany-port2469741919/001:/mount-9p --alsologtostderr -v=1                   │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh       │ functional-618103 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh       │ functional-618103 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh -- ls -la /mount-9p                                                                                         │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh cat /mount-9p/test-1758927210743049400                                                                      │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh stat /mount-9p/created-by-test                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh stat /mount-9p/created-by-pod                                                                               │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh sudo umount -f /mount-9p                                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ mount     │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdspecific-port3321653878/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh       │ functional-618103 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh       │ functional-618103 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh -- ls -la /mount-9p                                                                                         │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh sudo umount -f /mount-9p                                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ mount     │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount2 --alsologtostderr -v=1                │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ mount     │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount1 --alsologtostderr -v=1                │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ mount     │ -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount3 --alsologtostderr -v=1                │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh       │ functional-618103 ssh findmnt -T /mount1                                                                                          │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ ssh       │ functional-618103 ssh findmnt -T /mount1                                                                                          │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh findmnt -T /mount2                                                                                          │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh       │ functional-618103 ssh findmnt -T /mount3                                                                                          │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ mount     │ -p functional-618103 --kill=true                                                                                                  │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start     │ -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                       │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start     │ -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                       │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start     │ -p functional-618103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker                                 │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-618103 --alsologtostderr -v=1                                                                    │ functional-618103 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
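	
	The mount rows above come from minikube's 9p mount command, which stays in the foreground while the mount is live; reproducing one by hand looks like this (the host path is a placeholder):
	
	  out/minikube-linux-amd64 mount -p functional-618103 /tmp/data:/mount-9p --alsologtostderr -v=1
	  out/minikube-linux-amd64 -p functional-618103 ssh -- findmnt -T /mount-9p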
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:53:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:53:42.860082 1470331 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:42.860378 1470331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.860389 1470331 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:42.860394 1470331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.860610 1470331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:53:42.861084 1470331 out.go:368] Setting JSON to false
	I0926 22:53:42.862233 1470331 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16567,"bootTime":1758910656,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:42.862297 1470331 start.go:140] virtualization: kvm guest
	I0926 22:53:42.864204 1470331 out.go:179] * [functional-618103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:42.865433 1470331 notify.go:220] Checking for updates...
	I0926 22:53:42.865460 1470331 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:42.866821 1470331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:42.868122 1470331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:53:42.869399 1470331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:53:42.870466 1470331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:42.871525 1470331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:42.873004 1470331 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:53:42.873467 1470331 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:42.897179 1470331 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:53:42.897271 1470331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:42.955533 1470331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:42.944538569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:42.955662 1470331 docker.go:318] overlay module found
	I0926 22:53:42.957493 1470331 out.go:179] * Using the docker driver based on existing profile
	I0926 22:53:42.958658 1470331 start.go:304] selected driver: docker
	I0926 22:53:42.958672 1470331 start.go:924] validating driver "docker" against &{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:42.958752 1470331 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:42.958844 1470331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:43.011560 1470331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:43.002442894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:43.012166 1470331 cni.go:84] Creating CNI manager for ""
	I0926 22:53:43.012228 1470331 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 22:53:43.012283 1470331 start.go:348] cluster config:
	{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:43.014130 1470331 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 26 22:48:36 functional-618103 dockerd[6907]: time="2025-09-26T22:48:36.301310867Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:49:09 functional-618103 dockerd[6907]: time="2025-09-26T22:49:09.306045597Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:49:21 functional-618103 dockerd[6907]: time="2025-09-26T22:49:21.303651792Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:49:22 functional-618103 dockerd[6907]: time="2025-09-26T22:49:22.291936553Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:49:30 functional-618103 dockerd[6907]: time="2025-09-26T22:49:30.293308612Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:50:31 functional-618103 dockerd[6907]: time="2025-09-26T22:50:31.307385776Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:50:42 functional-618103 dockerd[6907]: time="2025-09-26T22:50:42.372873184Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:50:42 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:50:42Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Sep 26 22:50:51 functional-618103 dockerd[6907]: time="2025-09-26T22:50:51.315156165Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:51:01 functional-618103 dockerd[6907]: time="2025-09-26T22:51:01.301369890Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:23 functional-618103 dockerd[6907]: time="2025-09-26T22:53:23.360810568Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:23 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:23Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 26 22:53:28 functional-618103 dockerd[6907]: time="2025-09-26T22:53:28.299496553Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:32 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ecbf501ce8d99633476b94a501b15709ce1e05f42744847ae5b074277ecc8134/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:53:34 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 26 22:53:34 functional-618103 dockerd[6907]: time="2025-09-26T22:53:34.341663334Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:34 functional-618103 dockerd[6907]: time="2025-09-26T22:53:34.378805985Z" level=info msg="ignoring event" container=b70539961c432e23ea0ea9d2f2ceca8cde5a3580e7d978b3be8d3ade4c23bee8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:53:36 functional-618103 dockerd[6907]: time="2025-09-26T22:53:36.276677245Z" level=info msg="ignoring event" container=ecbf501ce8d99633476b94a501b15709ce1e05f42744847ae5b074277ecc8134 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 26 22:53:44 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e6015c4dcc62a36f1fa7a577aff6cd93cca4657d956e07d4c099ac15984ea206/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:53:44 functional-618103 cri-dockerd[7649]: time="2025-09-26T22:53:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/554145ac2092170655e1ec6d97caa1dbb9632fe0c0019acde1beb4875d3b901f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.507016262Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.534888630Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.552473025Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:44 functional-618103 dockerd[6907]: time="2025-09-26T22:53:44.577382631Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 26 22:53:45 functional-618103 dockerd[6907]: time="2025-09-26T22:53:45.290018725Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
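	
	Every failed pull in this window hits the same toomanyrequests response. Docker Hub reports the remaining allowance in response headers, so the limit can be probed directly from the affected host; a sketch of the documented check against the ratelimitpreview/test repository (jq assumed available):
	
	  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	  curl -sI -H "Authorization: Bearer $TOKEN" \
	    https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit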
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b70539961c432       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 seconds ago      Exited              mount-munger              0                   ecbf501ce8d99       busybox-mount
	05018b68e7ad4       mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                         5 minutes ago       Running             mysql                     0                   dae83eded4d5b       mysql-5bb876957f-vxwl9
	4b943b553941a       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   3bc210e6c6ac3       coredns-66bc5c9577-6k65s
	ad0540cd441bd       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       3                   d3ee80c4aeda3       storage-provisioner
	235de006a8779       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                2                   48fda113a91e1       kube-proxy-pf9r9
	3ab67dbd917d1       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   aa753dce76f58       kube-apiserver-functional-618103
	35544b6f97ec2       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   2                   2bc0ac715e665       kube-controller-manager-functional-618103
	2b6dc397a1c84       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            3                   ee9b228f455e5       kube-scheduler-functional-618103
	a17fd14faa229       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      2                   bcc4a4a7e9ec6       etcd-functional-618103
	983efcc4c2219       46169d968e920                                                                                         6 minutes ago       Exited              kube-scheduler            2                   fb0acbcbb6fde       kube-scheduler-functional-618103
	931219062fe5d       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       2                   71076a317d56c       storage-provisioner
	6fbe46e6db643       df0860106674d                                                                                         7 minutes ago       Exited              kube-proxy                1                   e383d1a41071d       kube-proxy-pf9r9
	d3e2802fbfa24       a0af72f2ec6d6                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   f9e4d86ab1fb9       kube-controller-manager-functional-618103
	488415e49873e       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   33d7cef7963e2       coredns-66bc5c9577-6k65s
	8881aa79fb35e       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      1                   7f1fdbbb616db       etcd-functional-618103
	
	
	==> coredns [488415e49873] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34736 - 5136 "HINFO IN 4907265284620355639.6271624726996692906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.436648483s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4b943b553941] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51912 - 9260 "HINFO IN 4297598984794218073.7043669035780725794. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02177634s
	
	
	==> describe nodes <==
	Name:               functional-618103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-618103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-618103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_44_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:44:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-618103
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:53:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:53:44 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:53:44 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:53:44 +0000   Fri, 26 Sep 2025 22:44:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:53:44 +0000   Fri, 26 Sep 2025 22:44:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-618103
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 a01939c13db74c7a972b5645dc883cba
	  System UUID:                11f06017-3556-4c19-9ebb-d79a2382242d
	  Boot ID:                    778ce869-c8a7-4efb-98b6-7ae64ac12ba5
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fzr2x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  default                     hello-node-connect-7d85dfc575-w9ff5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  default                     mysql-5bb876957f-vxwl9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m9s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-6k65s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m46s
	  kube-system                 etcd-functional-618103                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m51s
	  kube-system                 kube-apiserver-functional-618103              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-618103     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 kube-proxy-pf9r9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-scheduler-functional-618103              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-cjqdd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fhn94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m44s                  kube-proxy       
	  Normal   Starting                 6m30s                  kube-proxy       
	  Normal   Starting                 7m22s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  8m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m51s                  kubelet          Node functional-618103 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m51s                  kubelet          Node functional-618103 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m51s                  kubelet          Node functional-618103 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m51s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           8m47s                  node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	  Normal   RegisteredNode           7m28s                  node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	  Warning  ContainerGCFailed        6m51s (x2 over 7m51s)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node functional-618103 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node functional-618103 status is now: NodeHasSufficientMemory
	  Normal   Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node functional-618103 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m28s                  node-controller  Node functional-618103 event: Registered Node functional-618103 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 22 0c 1a c4 8b 08 06
	[  +1.813176] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 05 27 7d 9f 14 08 06
	[  +0.017756] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 f6 d3 97 e3 ca 08 06
	[  +0.515693] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 10 d3 fe cb 71 08 06
	[ +18.829685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 86 fd b1 a2 03 08 06
	[Sep26 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 47 8d 17 d7 e7 08 06
	[  +0.000516] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[Sep26 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 1b 32 9d 1a 30 08 06
	[  +0.000481] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 14 9e f1 bc cd 08 06
	[  +0.000612] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 3f 5d 98 cd 81 08 06
	[Sep26 22:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e b1 c7 4a e2 6c 08 06
	[Sep26 22:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea bc 84 e9 6e c4 08 06
	[Sep26 22:47] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d7 28 52 72 da 08 06
	
	
	==> etcd [8881aa79fb35] <==
	{"level":"warn","ts":"2025-09-26T22:46:11.662201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.668830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.674812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.690605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.699133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.705381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:11.752971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:58.674439Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:46:58.674530Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-618103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:46:58.674631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:47:05.676133Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:47:05.676235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.676247Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:47:05.676333Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:47:05.676346Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676357Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676397Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:47:05.676406Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:47:05.676408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T22:47:05.676420Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.679143Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:47:05.679207Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:47:05.679232Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:47:05.679238Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-618103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a17fd14faa22] <==
	{"level":"warn","ts":"2025-09-26T22:47:15.251136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.258665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.266576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.280370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.290315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.296457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.303961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.312495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.318859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.324819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.331796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.338660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.345818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.353890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.361233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.367442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.389077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.396638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.402935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.410587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.418655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.432386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.439253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.445958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:47:15.502995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:53:47 up  4:36,  0 users,  load average: 1.62, 0.87, 1.22
	Linux functional-618103 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3ab67dbd917d] <==
	I0926 22:47:17.418759       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 22:47:17.423871       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 22:47:19.272516       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 22:47:19.571655       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 22:47:19.673750       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0926 22:47:33.062752       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.2.70"}
	I0926 22:47:38.396763       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.30.141"}
	I0926 22:47:41.093036       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.151.149"}
	I0926 22:47:43.522048       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.224.140"}
	E0926 22:47:55.564718       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37350: use of closed network connection
	E0926 22:47:56.728343       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37366: use of closed network connection
	E0926 22:47:58.376463       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37390: use of closed network connection
	I0926 22:47:58.506130       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.213.113"}
	I0926 22:48:15.343140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:36.122427       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:28.174004       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:46.447396       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:53.418177       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:10.150540       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:08.395336       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:28.260095       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:27.812871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:43.903184       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:53:43.987321       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.146.163"}
	I0926 22:53:44.007330       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.253.246"}
	
	
	==> kube-controller-manager [35544b6f97ec] <==
	I0926 22:47:19.269262       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:47:19.269288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:47:19.269310       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:47:19.269345       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:47:19.269354       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:47:19.269355       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 22:47:19.269367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:47:19.269408       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0926 22:47:19.269458       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:47:19.270616       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:47:19.270654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:47:19.270705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:47:19.274641       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0926 22:47:19.275930       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:47:19.275977       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:47:19.277032       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:47:19.278282       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:47:19.282523       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0926 22:47:19.290777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:53:43.946336       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.949638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.949739       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.952844       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.955073       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:53:43.958713       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [d3e2802fbfa2] <==
	I0926 22:46:19.234504       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:46:19.236815       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:46:19.237970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:46:19.238038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:46:19.238071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:46:19.244342       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:46:19.247607       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0926 22:46:19.257028       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:46:19.257138       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0926 22:46:19.257161       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:46:19.257187       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:19.257202       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:46:19.257218       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:46:19.257356       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:46:19.257620       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:46:19.257635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:46:19.257650       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:46:19.257654       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0926 22:46:19.257809       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:46:19.257998       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-618103"
	I0926 22:46:19.258105       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:46:19.259596       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0926 22:46:19.263705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:46:19.263721       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:46:19.276803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [235de006a877] <==
	I0926 22:47:16.787337       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:47:16.840579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:47:16.941472       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:47:16.941535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:47:16.941686       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:47:16.967763       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:47:16.967831       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:47:16.974095       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:47:16.974545       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:47:16.974585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:47:16.976066       1 config.go:200] "Starting service config controller"
	I0926 22:47:16.976208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:47:16.976069       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:47:16.976066       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:47:16.976284       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:47:16.976354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:47:16.976214       1 config.go:309] "Starting node config controller"
	I0926 22:47:16.976844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:47:16.976855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:47:17.076460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:47:17.076543       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:47:17.077883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [6fbe46e6db64] <==
	I0926 22:46:24.477902       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:46:24.540344       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:46:24.641416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:46:24.641463       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:46:24.641586       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:46:24.663582       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:46:24.663633       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:46:24.668965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:46:24.669437       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:46:24.669458       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:24.671249       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:46:24.671283       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:46:24.671340       1 config.go:200] "Starting service config controller"
	I0926 22:46:24.671555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:46:24.671390       1 config.go:309] "Starting node config controller"
	I0926 22:46:24.671594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:46:24.671602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:46:24.671646       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:46:24.671719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:46:24.771534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:46:24.772269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:46:24.772259       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2b6dc397a1c8] <==
	I0926 22:47:14.817040       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:47:15.880591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:47:15.880632       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:47:15.880643       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:47:15.880656       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:47:15.894307       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:47:15.894494       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:47:15.896237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:15.896273       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:15.896546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:47:15.896606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:47:15.997423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [983efcc4c221] <==
	I0926 22:47:11.056121       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:47:11.419127       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W0926 22:47:11.419160       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:47:11.419168       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:47:11.426056       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:47:11.426079       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0926 22:47:11.426098       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0926 22:47:11.427765       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.427800       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0926 22:47:11.428164       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I0926 22:47:11.428464       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:47:11.428566       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0926 22:47:11.428505       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428606       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:47:11.428772       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:47:11.428793       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:47:11.428828       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:47:11.428851       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 26 22:53:36 functional-618103 kubelet[8543]: I0926 22:53:36.446390    8543 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729125ea-6953-4274-aabe-af8ca62eeaf8-kube-api-access-zdb5q" (OuterVolumeSpecName: "kube-api-access-zdb5q") pod "729125ea-6953-4274-aabe-af8ca62eeaf8" (UID: "729125ea-6953-4274-aabe-af8ca62eeaf8"). InnerVolumeSpecName "kube-api-access-zdb5q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 26 22:53:36 functional-618103 kubelet[8543]: I0926 22:53:36.545567    8543 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/729125ea-6953-4274-aabe-af8ca62eeaf8-test-volume\") on node \"functional-618103\" DevicePath \"\""
	Sep 26 22:53:36 functional-618103 kubelet[8543]: I0926 22:53:36.545607    8543 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdb5q\" (UniqueName: \"kubernetes.io/projected/729125ea-6953-4274-aabe-af8ca62eeaf8-kube-api-access-zdb5q\") on node \"functional-618103\" DevicePath \"\""
	Sep 26 22:53:37 functional-618103 kubelet[8543]: I0926 22:53:37.156349    8543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecbf501ce8d99633476b94a501b15709ce1e05f42744847ae5b074277ecc8134"
	Sep 26 22:53:39 functional-618103 kubelet[8543]: E0926 22:53:39.203074    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-w9ff5" podUID="ae581ca3-c736-4362-b682-e2a0f6c6732e"
	Sep 26 22:53:43 functional-618103 kubelet[8543]: I0926 22:53:43.991091    8543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fns6\" (UniqueName: \"kubernetes.io/projected/9fc37a9c-87d9-4d9c-90a7-eec9a7111622-kube-api-access-5fns6\") pod \"dashboard-metrics-scraper-77bf4d6c4c-cjqdd\" (UID: \"9fc37a9c-87d9-4d9c-90a7-eec9a7111622\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd"
	Sep 26 22:53:43 functional-618103 kubelet[8543]: I0926 22:53:43.991168    8543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9fc37a9c-87d9-4d9c-90a7-eec9a7111622-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-cjqdd\" (UID: \"9fc37a9c-87d9-4d9c-90a7-eec9a7111622\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: I0926 22:53:44.091812    8543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/507a8daf-df4a-4778-ad32-cec1c5b384b9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fhn94\" (UID: \"507a8daf-df4a-4778-ad32-cec1c5b384b9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: I0926 22:53:44.091865    8543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgxd\" (UniqueName: \"kubernetes.io/projected/507a8daf-df4a-4778-ad32-cec1c5b384b9-kube-api-access-qxgxd\") pod \"kubernetes-dashboard-855c9754f9-fhn94\" (UID: \"507a8daf-df4a-4778-ad32-cec1c5b384b9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.537286    8543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.537343    8543 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.537565    8543 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-fhn94_kubernetes-dashboard(507a8daf-df4a-4778-ad32-cec1c5b384b9): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.537629    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.579496    8543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.579556    8543 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.579645    8543 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd_kubernetes-dashboard(9fc37a9c-87d9-4d9c-90a7-eec9a7111622): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:53:44 functional-618103 kubelet[8543]: E0926 22:53:44.579676    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:53:45 functional-618103 kubelet[8543]: E0926 22:53:45.226914    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjqdd" podUID="9fc37a9c-87d9-4d9c-90a7-eec9a7111622"
	Sep 26 22:53:45 functional-618103 kubelet[8543]: E0926 22:53:45.234556    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fhn94" podUID="507a8daf-df4a-4778-ad32-cec1c5b384b9"
	Sep 26 22:53:45 functional-618103 kubelet[8543]: E0926 22:53:45.292304    8543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 26 22:53:45 functional-618103 kubelet[8543]: E0926 22:53:45.292355    8543 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 26 22:53:45 functional-618103 kubelet[8543]: E0926 22:53:45.292441    8543 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-fzr2x_default(70c9f3fe-65e4-43d6-b79d-26854c072cdb): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:53:45 functional-618103 kubelet[8543]: E0926 22:53:45.292491    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fzr2x" podUID="70c9f3fe-65e4-43d6-b79d-26854c072cdb"
	Sep 26 22:53:46 functional-618103 kubelet[8543]: E0926 22:53:46.202956    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="f724a102-f6d8-4b2f-81d3-f320399fc9ec"
	Sep 26 22:53:46 functional-618103 kubelet[8543]: E0926 22:53:46.204767    8543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b75937ed-7333-4506-a767-ccd53069b9d4"
	
	
	==> storage-provisioner [931219062fe5] <==
	W0926 22:46:33.053707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:36.651697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:39.705759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.727822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.732935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:42.733086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:46:42.733180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed6505a3-eb20-46af-b1ab-54226975775d", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0 became leader
	I0926 22:46:42.733254       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0!
	W0926 22:46:42.735190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:42.739871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:42.833591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-618103_9239ae2c-6236-4135-9b69-cca0304990a0!
	W0926 22:46:44.743050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:44.747089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:46.750572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:46.755678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:48.758889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:48.762954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:50.766338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:50.770297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:52.772964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:52.776945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:54.780028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:54.783787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:56.787442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:56.791219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ad0540cd441b] <==
	W0926 22:53:23.487127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:25.490538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:25.494871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:27.498292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:27.503445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:29.506721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:29.511152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:31.514791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:31.518583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:33.521151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:33.524959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:35.528297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:35.532226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:37.535227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:37.540809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:39.544870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:39.548620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:41.551406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:41.555690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:43.558587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:43.564097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:45.567412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:45.571135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:47.574058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:47.578979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
helpers_test.go:269: (dbg) Run:  kubectl --context functional-618103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94: exit status 1 (93.735768ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:53:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b70539961c432e23ea0ea9d2f2ceca8cde5a3580e7d978b3be8d3ade4c23bee8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:53:34 +0000
	      Finished:     Fri, 26 Sep 2025 22:53:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdb5q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zdb5q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15s   default-scheduler  Successfully assigned default/busybox-mount to functional-618103
	  Normal  Pulling    16s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     14s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.484s (1.484s including waiting). Image size: 4403845 bytes.
	  Normal  Created    14s   kubelet            Created container: mount-munger
	  Normal  Started    14s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fzr2x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwg9w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lwg9w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m49s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fzr2x to functional-618103
	  Normal   Pulling    2m47s (x5 over 5m50s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m47s (x5 over 5m49s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m47s (x5 over 5m49s)  kubelet            Error: ErrImagePull
	  Warning  Failed     45s (x20 over 5m49s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    31s (x21 over 5m49s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-w9ff5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zhs6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7zhs6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m4s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w9ff5 to functional-618103
	  Warning  Failed     4m27s (x3 over 5m59s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m6s (x5 over 6m4s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m6s (x5 over 5m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m6s (x2 over 5m14s)   kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x20 over 5m59s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    45s (x21 over 5m59s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfhnj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cfhnj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m7s                   default-scheduler  Successfully assigned default/nginx-svc to functional-618103
	  Warning  Failed     5m59s                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m57s (x5 over 6m6s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m57s (x5 over 5m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m57s (x4 over 5m45s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    55s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     55s (x21 over 5m59s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618103/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:47:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jbh5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jbh5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-618103
	  Normal   Pulling    3m17s (x5 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m17s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m17s (x5 over 5m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     53s (x20 over 5m59s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    39s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-cjqdd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fhn94" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-618103 describe pod busybox-mount hello-node-75c85bcc94-fzr2x hello-node-connect-7d85dfc575-w9ff5 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-cjqdd kubernetes-dashboard-855c9754f9-fhn94: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.67s)
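
Note on root cause: every pod in this failure is stuck on the same docker.io throttle (toomanyrequests: unauthenticated pull rate limit), so sp-pod's nginx container never started and the PVC assertions timed out. As a remediation sketch for a re-run (this assumes real Docker Hub credentials are available to the job; "regcred" is a hypothetical secret name, not something the suite creates):

    kubectl --context functional-618103 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl --context functional-618103 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Pods created afterwards in the default namespace would pull with the authenticated, higher rate limit.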

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-618103 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b75937ed-7333-4506-a767-ccd53069b9d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-26 22:51:41.40523103 +0000 UTC m=+1371.361213956
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-618103 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-618103 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-618103/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:47:41 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfhnj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-cfhnj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-618103
  Warning  Failed     3m52s                kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    50s (x5 over 3m59s)  kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     50s (x5 over 3m52s)  kubelet            Error: ErrImagePull
  Warning  Failed     50s (x4 over 3m38s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    3s (x15 over 3m52s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     3s (x15 over 3m52s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-618103 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-618103 logs nginx-svc -n default: exit status 1 (69.211241ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-618103 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.69s)
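
Same docker.io throttling as above: nginx-svc sat in ImagePullBackOff for the whole 4m0s wait, so tunnel setup never had a Ready pod to expose. A one-line diagnostic (a sketch, not part of the suite) that surfaces the waiting reason directly:

    kubectl --context functional-618103 get pod nginx-svc \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
    # prints: ImagePullBackOff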

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-618103 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-618103 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fzr2x" [70c9f3fe-65e4-43d6-b79d-26854c072cdb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0926 22:49:08.239911 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:51:24.380340 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-618103 -n functional-618103
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-26 22:57:58.809391999 +0000 UTC m=+1748.765374940
functional_test.go:1460: (dbg) Run:  kubectl --context functional-618103 describe po hello-node-75c85bcc94-fzr2x -n default
functional_test.go:1460: (dbg) kubectl --context functional-618103 describe po hello-node-75c85bcc94-fzr2x -n default:
Name:             hello-node-75c85bcc94-fzr2x
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-618103/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:47:58 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwg9w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-lwg9w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fzr2x to functional-618103
  Normal   Pulling    6m57s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m57s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m57s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m55s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m41s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-618103 logs hello-node-75c85bcc94-fzr2x -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-618103 logs hello-node-75c85bcc94-fzr2x -n default: exit status 1 (63.271985ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-fzr2x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-618103 logs hello-node-75c85bcc94-fzr2x -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)
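
Two things worth noting here. First, the interleaved cert_rotation errors reference the already-deleted addons-619347 profile and are unrelated noise. Second, the deployment is created with a bare image name (kicbase/echo-server), which the Docker daemon resolves to docker.io and which therefore hits the same unauthenticated rate limit. A sketch of a Hub-independent variant (registry.k8s.io/echoserver:1.4 is an illustrative stand-in, not the image the suite pins):

    kubectl --context functional-618103 create deployment hello-node \
      --image=registry.k8s.io/echoserver:1.4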

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0926 22:51:41.540142 1399974 retry.go:31] will retry after 4.273314163s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:51:45.813987 1399974 retry.go:31] will retry after 2.633853191s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:51:48.448407 1399974 retry.go:31] will retry after 4.517417697s: Temporary Error: Get "http:": http: no Host in request URL
E0926 22:51:52.081880 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0926 22:51:52.966169 1399974 retry.go:31] will retry after 10.751431063s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:52:03.718101 1399974 retry.go:31] will retry after 15.490709514s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:52:19.209607 1399974 retry.go:31] will retry after 32.124807619s: Temporary Error: Get "http:": http: no Host in request URL
I0926 22:52:51.335248 1399974 retry.go:31] will retry after 38.110326407s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-618103 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.111.151.149   10.111.151.149   80:30587/TCP   5m48s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.97s)
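
The retried URL is literally "http:" with no host, which suggests the test never recorded a tunnel endpoint once WaitService/Setup failed, even though the LoadBalancer ingress IP itself was assigned (10.111.151.149, matching the ClusterIP, as minikube tunnel does). A manual check (diagnostic sketch only; the request would still fail here because the nginx container never started):

    kubectl --context functional-618103 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # then, from the host running `minikube tunnel`:
    curl -sS http://10.111.151.149/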

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 service --namespace=default --https --url hello-node: exit status 115 (525.568719ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31437
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-618103 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 service hello-node --url --format={{.IP}}: exit status 115 (521.415595ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-618103 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 service hello-node --url: exit status 115 (528.961274ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31437
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-618103 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31437
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
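
All three ServiceCmd failures (HTTPS, Format, URL) exit with status 115 and the same SVC_UNREACHABLE message: the hello-node service has a NodePort and a well-formed URL, but no Ready pod behind it because the echo-server image never pulled. A quick confirmation (diagnostic sketch):

    kubectl --context functional-618103 get endpoints hello-node
    # ENDPOINTS shows <none> while the backing pod is in ImagePullBackOff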

                                                
                                    

Test pass (311/346)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.72
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 3.88
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.08
21 TestBinaryMirror 0.79
22 TestOffline 92.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 142.66
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.49
35 TestAddons/parallel/Registry 14.94
36 TestAddons/parallel/RegistryCreds 0.56
38 TestAddons/parallel/InspektorGadget 6.21
39 TestAddons/parallel/MetricsServer 6.55
42 TestAddons/parallel/Headlamp 17.43
43 TestAddons/parallel/CloudSpanner 5.45
45 TestAddons/parallel/NvidiaDevicePlugin 5.42
46 TestAddons/parallel/Yakd 10.61
47 TestAddons/parallel/AmdGpuDevicePlugin 6.42
48 TestAddons/StoppedEnableDisable 11.14
49 TestCertOptions 26.37
50 TestCertExpiration 244.39
51 TestDockerFlags 44.39
52 TestForceSystemdFlag 42.29
53 TestForceSystemdEnv 24.28
55 TestKVMDriverInstallOrUpdate 0.52
59 TestErrorSpam/setup 22.33
60 TestErrorSpam/start 0.6
61 TestErrorSpam/status 0.88
62 TestErrorSpam/pause 1.15
63 TestErrorSpam/unpause 1.19
64 TestErrorSpam/stop 10.88
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 65.38
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 55.02
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.06
76 TestFunctional/serial/CacheCmd/cache/add_local 0.7
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.31
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 50.69
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.01
87 TestFunctional/serial/LogsFileCmd 1.03
88 TestFunctional/serial/InvalidService 4.66
90 TestFunctional/parallel/ConfigCmd 0.36
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.9
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.62
103 TestFunctional/parallel/CpCmd 1.9
104 TestFunctional/parallel/MySQL 20.14
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.8
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
114 TestFunctional/parallel/License 0.15
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
119 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
120 TestFunctional/parallel/ImageCommands/Setup 0.42
121 TestFunctional/parallel/DockerEnv/bash 1.07
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
144 TestFunctional/parallel/ProfileCmd/profile_list 0.37
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
146 TestFunctional/parallel/MountCmd/any-port 7.49
147 TestFunctional/parallel/MountCmd/specific-port 1.74
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
149 TestFunctional/parallel/Version/short 0.05
150 TestFunctional/parallel/Version/components 0.48
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 93.54
164 TestMultiControlPlane/serial/DeployApp 49.22
165 TestMultiControlPlane/serial/PingHostFromPods 1.13
166 TestMultiControlPlane/serial/AddWorkerNode 14.26
167 TestMultiControlPlane/serial/NodeLabels 0.09
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
169 TestMultiControlPlane/serial/CopyFile 16.76
170 TestMultiControlPlane/serial/StopSecondaryNode 11.46
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
172 TestMultiControlPlane/serial/RestartSecondaryNode 59.49
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 222.74
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.29
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
177 TestMultiControlPlane/serial/StopCluster 32.46
178 TestMultiControlPlane/serial/RestartCluster 106.06
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
180 TestMultiControlPlane/serial/AddSecondaryNode 31.43
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
184 TestImageBuild/serial/Setup 23.99
185 TestImageBuild/serial/NormalBuild 1.02
186 TestImageBuild/serial/BuildWithBuildArg 0.64
187 TestImageBuild/serial/BuildWithDockerIgnore 0.46
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.47
192 TestJSONOutput/start/Command 69.18
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.49
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.45
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 10.77
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.2
217 TestKicCustomNetwork/create_custom_network 24.91
218 TestKicCustomNetwork/use_default_bridge_network 25.16
219 TestKicExistingNetwork 24.07
220 TestKicCustomSubnet 24
221 TestKicStaticIP 23.78
222 TestMainNoArgs 0.05
223 TestMinikubeProfile 49.63
226 TestMountStart/serial/StartWithMountFirst 7.04
227 TestMountStart/serial/VerifyMountFirst 0.25
228 TestMountStart/serial/StartWithMountSecond 7.4
229 TestMountStart/serial/VerifyMountSecond 0.25
230 TestMountStart/serial/DeleteFirst 1.49
231 TestMountStart/serial/VerifyMountPostDelete 0.25
232 TestMountStart/serial/Stop 1.18
233 TestMountStart/serial/RestartStopped 8.47
234 TestMountStart/serial/VerifyMountPostStop 0.25
237 TestMultiNode/serial/FreshStart2Nodes 55.44
238 TestMultiNode/serial/DeployApp2Nodes 44.08
239 TestMultiNode/serial/PingHostFrom2Pods 0.78
240 TestMultiNode/serial/AddNode 13.76
241 TestMultiNode/serial/MultiNodeLabels 0.06
242 TestMultiNode/serial/ProfileList 0.68
243 TestMultiNode/serial/CopyFile 9.62
244 TestMultiNode/serial/StopNode 2.16
245 TestMultiNode/serial/StartAfterStop 8.62
246 TestMultiNode/serial/RestartKeepsNodes 73.43
247 TestMultiNode/serial/DeleteNode 5.18
248 TestMultiNode/serial/StopMultiNode 21.62
249 TestMultiNode/serial/RestartMultiNode 52.91
250 TestMultiNode/serial/ValidateNameConflict 25.52
255 TestPreload 143.34
257 TestScheduledStopUnix 94.78
258 TestSkaffold 74.06
260 TestInsufficientStorage 9.92
261 TestRunningBinaryUpgrade 56.18
263 TestKubernetesUpgrade 347.66
264 TestMissingContainerUpgrade 68.81
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
267 TestStoppedBinaryUpgrade/Setup 0.53
268 TestNoKubernetes/serial/StartWithK8s 38.64
269 TestStoppedBinaryUpgrade/Upgrade 70.09
270 TestNoKubernetes/serial/StartWithStopK8s 17.63
271 TestNoKubernetes/serial/Start 7.11
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
273 TestNoKubernetes/serial/ProfileList 2.03
274 TestNoKubernetes/serial/Stop 1.23
275 TestNoKubernetes/serial/StartNoArgs 7.66
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
286 TestPause/serial/Start 61.45
298 TestPause/serial/SecondStartNoReconfiguration 52.11
300 TestStartStop/group/old-k8s-version/serial/FirstStart 39.81
301 TestPause/serial/Pause 0.47
302 TestPause/serial/VerifyStatus 0.31
303 TestPause/serial/Unpause 0.47
304 TestPause/serial/PauseAgain 0.54
305 TestPause/serial/DeletePaused 2.15
306 TestStartStop/group/old-k8s-version/serial/DeployApp 8.33
307 TestPause/serial/VerifyDeletedResources 18.73
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
309 TestStartStop/group/old-k8s-version/serial/Stop 10.73
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
311 TestStartStop/group/old-k8s-version/serial/SecondStart 50.62
313 TestStartStop/group/no-preload/serial/FirstStart 47.99
314 TestStartStop/group/no-preload/serial/DeployApp 9.27
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
318 TestStartStop/group/no-preload/serial/Stop 12.17
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/old-k8s-version/serial/Pause 2.58
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.19
324 TestStartStop/group/newest-cni/serial/FirstStart 31.73
325 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
326 TestStartStop/group/no-preload/serial/SecondStart 49.77
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
329 TestStartStop/group/newest-cni/serial/Stop 10.12
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
332 TestStartStop/group/newest-cni/serial/SecondStart 12.95
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
335 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/newest-cni/serial/Pause 2.32
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.02
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
344 TestStartStop/group/embed-certs/serial/FirstStart 66.97
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
346 TestStartStop/group/no-preload/serial/Pause 2.61
347 TestNetworkPlugins/group/auto/Start 44.35
348 TestNetworkPlugins/group/kindnet/Start 44.43
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
350 TestNetworkPlugins/group/auto/KubeletFlags 0.29
351 TestNetworkPlugins/group/auto/NetCatPod 8.18
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
353 TestNetworkPlugins/group/auto/DNS 0.16
354 TestNetworkPlugins/group/auto/Localhost 0.14
355 TestNetworkPlugins/group/auto/HairPin 0.14
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71
358 TestStartStop/group/embed-certs/serial/DeployApp 8.28
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/Start 51.23
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
362 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
364 TestStartStop/group/embed-certs/serial/Stop 11.74
365 TestNetworkPlugins/group/custom-flannel/Start 52.76
366 TestNetworkPlugins/group/kindnet/DNS 0.17
367 TestNetworkPlugins/group/kindnet/Localhost 0.14
368 TestNetworkPlugins/group/kindnet/HairPin 0.14
369 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
370 TestStartStop/group/embed-certs/serial/SecondStart 55.38
371 TestNetworkPlugins/group/false/Start 68.14
372 TestNetworkPlugins/group/calico/ControllerPod 6.02
373 TestNetworkPlugins/group/calico/KubeletFlags 0.32
374 TestNetworkPlugins/group/calico/NetCatPod 10.21
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
377 TestNetworkPlugins/group/calico/DNS 0.13
378 TestNetworkPlugins/group/calico/Localhost 0.15
379 TestNetworkPlugins/group/calico/HairPin 0.14
380 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
381 TestNetworkPlugins/group/custom-flannel/DNS 0.15
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
384 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
386 TestStartStop/group/embed-certs/serial/Pause 2.58
387 TestNetworkPlugins/group/enable-default-cni/Start 67.36
388 TestNetworkPlugins/group/flannel/Start 47.13
389 TestNetworkPlugins/group/bridge/Start 67.72
390 TestNetworkPlugins/group/false/KubeletFlags 0.31
391 TestNetworkPlugins/group/false/NetCatPod 9.22
392 TestNetworkPlugins/group/false/DNS 0.18
393 TestNetworkPlugins/group/false/Localhost 0.14
394 TestNetworkPlugins/group/false/HairPin 0.12
395 TestNetworkPlugins/group/kubenet/Start 66.48
396 TestNetworkPlugins/group/flannel/ControllerPod 6.01
397 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
398 TestNetworkPlugins/group/flannel/NetCatPod 10.18
399 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
400 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
401 TestNetworkPlugins/group/flannel/DNS 0.17
402 TestNetworkPlugins/group/flannel/Localhost 0.14
403 TestNetworkPlugins/group/flannel/HairPin 0.14
404 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
405 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
406 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
407 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
408 TestNetworkPlugins/group/bridge/NetCatPod 9.2
409 TestNetworkPlugins/group/bridge/DNS 0.18
410 TestNetworkPlugins/group/bridge/Localhost 0.21
411 TestNetworkPlugins/group/bridge/HairPin 0.17
412 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
413 TestNetworkPlugins/group/kubenet/NetCatPod 10.17
414 TestNetworkPlugins/group/kubenet/DNS 0.14
415 TestNetworkPlugins/group/kubenet/Localhost 0.12
416 TestNetworkPlugins/group/kubenet/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (4.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-036757 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-036757 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.723755438s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.72s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0926 22:28:54.808186 1399974 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0926 22:28:54.808310 1399974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
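Note: the preload check passes as soon as the version-matched tarball is already present in the local cache; nothing is downloaded in this subtest. A manual spot-check looks like the following (a sketch; the cache root is this CI host's MINIKUBE_HOME and will differ on other machines):

  # List cached preload tarballs; the v1.28.0/docker entry is the one this test looks for
  ls -lh /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/
  # Expected, per the log above: preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4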

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-036757
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-036757: exit status 85 (63.435201ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-036757 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-036757 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:28:50
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:28:50.128139 1399986 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:28:50.128403 1399986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:50.128412 1399986 out.go:374] Setting ErrFile to fd 2...
	I0926 22:28:50.128416 1399986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:50.128599 1399986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	W0926 22:28:50.128719 1399986 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21642-1396392/.minikube/config/config.json: open /home/jenkins/minikube-integration/21642-1396392/.minikube/config/config.json: no such file or directory
	I0926 22:28:50.129225 1399986 out.go:368] Setting JSON to true
	I0926 22:28:50.130167 1399986 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15074,"bootTime":1758910656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:28:50.130263 1399986 start.go:140] virtualization: kvm guest
	I0926 22:28:50.132621 1399986 out.go:99] [download-only-036757] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0926 22:28:50.132773 1399986 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 22:28:50.132834 1399986 notify.go:220] Checking for updates...
	I0926 22:28:50.134112 1399986 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:28:50.135658 1399986 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:28:50.137279 1399986 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:28:50.138686 1399986 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:28:50.140049 1399986 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:28:50.142472 1399986 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:28:50.142715 1399986 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:28:50.166923 1399986 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:28:50.167066 1399986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:28:50.222449 1399986 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-26 22:28:50.21140911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:28:50.222575 1399986 docker.go:318] overlay module found
	I0926 22:28:50.224327 1399986 out.go:99] Using the docker driver based on user configuration
	I0926 22:28:50.224364 1399986 start.go:304] selected driver: docker
	I0926 22:28:50.224373 1399986 start.go:924] validating driver "docker" against <nil>
	I0926 22:28:50.224564 1399986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:28:50.276815 1399986 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-26 22:28:50.266598683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:28:50.276977 1399986 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:28:50.277474 1399986 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:28:50.277650 1399986 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:28:50.279728 1399986 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-036757 host does not exist
	  To start a cluster, run: "minikube start -p download-only-036757"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
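Note: the non-zero exit is the expected outcome here. A download-only profile never creates a host, so "minikube logs" has nothing to read and fails with exit status 85, which the test asserts. Reproducing by hand (a sketch; assumes a built out/minikube-linux-amd64 binary):

  out/minikube-linux-amd64 start -o=json --download-only -p download-only-036757 --force --kubernetes-version=v1.28.0 --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 logs -p download-only-036757   # prints the "host does not exist" hint
  echo $?                                                 # 85 in this run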

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-036757
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (3.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-040048 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-040048 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.88308509s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (3.88s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0926 22:28:59.104624 1399974 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0926 22:28:59.104689 1399974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-1396392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-040048
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-040048: exit status 85 (62.211908ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-036757 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-036757 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ delete  │ -p download-only-036757                                                                                                                                                       │ download-only-036757 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │ 26 Sep 25 22:28 UTC │
	│ start   │ -o=json --download-only -p download-only-040048 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-040048 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:28:55
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:28:55.262191 1400332 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:28:55.262323 1400332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:55.262333 1400332 out.go:374] Setting ErrFile to fd 2...
	I0926 22:28:55.262337 1400332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:55.262580 1400332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:28:55.263123 1400332 out.go:368] Setting JSON to true
	I0926 22:28:55.264046 1400332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15079,"bootTime":1758910656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:28:55.264130 1400332 start.go:140] virtualization: kvm guest
	I0926 22:28:55.266020 1400332 out.go:99] [download-only-040048] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:28:55.266175 1400332 notify.go:220] Checking for updates...
	I0926 22:28:55.267399 1400332 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:28:55.268660 1400332 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:28:55.269919 1400332 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:28:55.271100 1400332 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:28:55.272335 1400332 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:28:55.274489 1400332 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:28:55.274758 1400332 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:28:55.296373 1400332 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:28:55.296523 1400332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:28:55.350447 1400332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-26 22:28:55.33953517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:28:55.350605 1400332 docker.go:318] overlay module found
	I0926 22:28:55.352364 1400332 out.go:99] Using the docker driver based on user configuration
	I0926 22:28:55.352396 1400332 start.go:304] selected driver: docker
	I0926 22:28:55.352404 1400332 start.go:924] validating driver "docker" against <nil>
	I0926 22:28:55.352542 1400332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:28:55.405054 1400332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-26 22:28:55.395753298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:28:55.405230 1400332 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:28:55.405735 1400332 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:28:55.405892 1400332 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:28:55.407680 1400332 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-040048 host does not exist
	  To start a cluster, run: "minikube start -p download-only-040048"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-040048
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.08s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-193843 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-193843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-193843
--- PASS: TestDownloadOnlyKic (1.08s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I0926 22:29:00.866825 1399974 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-237584 --alsologtostderr --binary-mirror http://127.0.0.1:35911 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-237584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-237584
--- PASS: TestBinaryMirror (0.79s)
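Note: this test only verifies that the kubectl/kubelet/kubeadm downloads are routed through the supplied mirror instead of dl.k8s.io. A hand-run version (a sketch; assumes a mirror that mimics the dl.k8s.io release layout is already listening on the given address, as the test harness arranges on 127.0.0.1:35911 in this run):

  out/minikube-linux-amd64 start --download-only -p binary-mirror-237584 --alsologtostderr --binary-mirror http://127.0.0.1:35911 --driver=docker --container-runtime=docker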

                                                
                                    
TestOffline (92.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-598358 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-598358 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m28.692870384s)
helpers_test.go:175: Cleaning up "offline-docker-598358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-598358
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-598358: (3.466885218s)
--- PASS: TestOffline (92.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-619347
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-619347: exit status 85 (54.316298ms)

                                                
                                                
-- stdout --
	* Profile "addons-619347" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-619347"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-619347
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-619347: exit status 85 (53.556466ms)

                                                
                                                
-- stdout --
	* Profile "addons-619347" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-619347"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (142.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-619347 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-619347 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m22.662092739s)
--- PASS: TestAddons/Setup (142.66s)
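Note: the long --addons list enables everything at cluster-creation time; individual addons can equally be toggled after the profile is up, which is the pattern the parallel subtests below rely on (a sketch; assumes the addons-619347 profile from this run is still running):

  out/minikube-linux-amd64 addons enable metrics-server -p addons-619347
  out/minikube-linux-amd64 -p addons-619347 addons disable metrics-server --alsologtostderr -v=1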

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-619347 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-619347 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-619347 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-619347 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [65d2ae0d-d7b4-4987-ba24-3abc68a23dd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [65d2ae0d-d7b4-4987-ba24-3abc68a23dd8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003993157s
addons_test.go:694: (dbg) Run:  kubectl --context addons-619347 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-619347 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-619347 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.49s)
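Note: the assertion here is that the gcp-auth webhook mutates newly created pods with fake Google credentials. The same check can be repeated by hand against the busybox pod (commands as in the log; assumes the pod is still running):

  kubectl --context addons-619347 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-619347 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"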

                                                
                                    
TestAddons/parallel/Registry (14.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.711164ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-gxfpk" [02236731-d4ca-42bf-bb39-ba8fc407b333] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002826556s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-vs5xn" [f52ee9a8-d5d7-418f-8f71-2243c5ebfe4a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002620189s
addons_test.go:392: (dbg) Run:  kubectl --context addons-619347 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-619347 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-619347 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.21229175s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 ip
2025/09/26 22:35:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.94s)
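Note: the registry addon is exercised two ways: in-cluster via service DNS from a throwaway busybox pod, then from outside via the node IP, where registry-proxy answers on port 5000. By hand (commands as in the log):

  kubectl --context addons-619347 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  out/minikube-linux-amd64 -p addons-619347 ip   # 192.168.49.2 in this run; http://<ip>:5000 reaches the proxy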

                                                
                                    
TestAddons/parallel/RegistryCreds (0.56s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.351828ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-619347
addons_test.go:332: (dbg) Run:  kubectl --context addons-619347 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.56s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9rfhl" [ac146948-09ef-4a91-ba38-fcdc6c13f270] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002885956s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.21s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.153381ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mjlqr" [18663e65-efc9-4e15-8dad-c4e23a7f7f18] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003726857s
addons_test.go:463: (dbg) Run:  kubectl --context addons-619347 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.55s)

                                                
                                    
TestAddons/parallel/Headlamp (17.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-619347 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-w98c9" [672c8079-445a-4192-b06c-214a56b4c0ca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-w98c9" [672c8079-445a-4192-b06c-214a56b4c0ca] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004091755s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-619347 addons disable headlamp --alsologtostderr -v=1: (5.726269288s)
--- PASS: TestAddons/parallel/Headlamp (17.43s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-2hm5t" [0a4e0ce6-a758-41ad-9c28-6ec556d4d54a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003391429s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.42s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-q4gzr" [7f1521a5-454c-418e-b05e-032a88a3e3f4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003938069s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.42s)

                                                
                                    
TestAddons/parallel/Yakd (10.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-gcppv" [72be23e3-5f93-47d5-b71b-2ebb0f899196] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003809227s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-619347 addons disable yakd --alsologtostderr -v=1: (5.604547089s)
--- PASS: TestAddons/parallel/Yakd (10.61s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.42s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-vs4x8" [4b80c3e5-edd1-4ef9-ab2d-9e72b02f0248] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003218525s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-619347 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.42s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.14s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-619347
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-619347: (10.887371498s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-619347
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-619347
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-619347
--- PASS: TestAddons/StoppedEnableDisable (11.14s)

                                                
                                    
TestCertOptions (26.37s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-941167 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-941167 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (23.653714032s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-941167 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-941167 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-941167 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-941167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-941167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-941167: (2.127891354s)
--- PASS: TestCertOptions (26.37s)
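Note: the pass criteria are that the extra SANs (--apiserver-ips/--apiserver-names) and the non-default --apiserver-port land in the apiserver serving certificate and kubeconfig. A quick manual verification (a sketch; the grep filter is illustrative, not part of the test):

  out/minikube-linux-amd64 -p cert-options-941167 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E "192.168.15.15|www.google.com"
  kubectl --context cert-options-941167 config view   # the server URL should end in :8555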

                                                
                                    
TestCertExpiration (244.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-693611 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-693611 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (26.243538234s)
E0926 23:27:38.439708 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-693611 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0926 23:30:56.596824 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-693611 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (35.701091049s)
helpers_test.go:175: Cleaning up "cert-expiration-693611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-693611
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-693611: (2.443682768s)
--- PASS: TestCertExpiration (244.39s)
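Note: the test issues certificates with a deliberately short 3m lifetime, waits for them to lapse, then restarts with a long expiration to confirm minikube rotates them instead of failing. Reduced to its two start calls (a sketch; the trailing openssl check is an optional extra, assuming openssl is available in the node image):

  out/minikube-linux-amd64 start -p cert-expiration-693611 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=docker
  # ...wait out the 3-minute validity window, then:
  out/minikube-linux-amd64 start -p cert-expiration-693611 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 -p cert-expiration-693611 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"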

                                                
                                    
TestDockerFlags (44.39s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-727969 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-727969 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.546281919s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-727969 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-727969 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-727969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-727969
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-727969: (2.165023506s)
--- PASS: TestDockerFlags (44.39s)
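
The two ssh probes are the test's assertions: values passed via --docker-env must show up in the docker unit's Environment property, and --docker-opt values in its ExecStart line. A sketch of checking this by hand, reusing the log's own commands; the grep patterns (and the exact icc=true rendering in ExecStart) are assumptions:

  $ out/minikube-linux-amd64 -p docker-flags-727969 ssh \
      "sudo systemctl show docker --property=Environment --no-pager" | grep FOO=BAR
  $ out/minikube-linux-amd64 -p docker-flags-727969 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager" | grep -e icc=true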

TestForceSystemdFlag (42.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-625511 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-625511 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.4839539s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-625511 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-625511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-625511
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-625511: (2.455999761s)
--- PASS: TestForceSystemdFlag (42.29s)
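
--force-systemd switches the container runtime's cgroup driver from the default cgroupfs to systemd, and the single ssh probe above is the whole assertion. By hand:

  $ out/minikube-linux-amd64 -p force-systemd-flag-625511 ssh \
      "docker info --format {{.CgroupDriver}}"
  systemd    # expected output when the flag took effect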

TestForceSystemdEnv (24.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-840317 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-840317 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (21.841062865s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-840317 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-840317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-840317
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-840317: (2.119922836s)
--- PASS: TestForceSystemdEnv (24.28s)
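
Same verification as TestForceSystemdFlag, but driven by the environment rather than a flag, which is why no --force-systemd appears in the start command above: the test exports MINIKUBE_FORCE_SYSTEMD before invoking minikube. A sketch, assuming the value true (the variable name itself appears in the env dumps later in this report):

  $ MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-840317 --driver=docker
  $ out/minikube-linux-amd64 -p force-systemd-env-840317 ssh "docker info --format {{.CgroupDriver}}"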

TestKVMDriverInstallOrUpdate (0.52s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0926 23:28:41.702957 1399974 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 23:28:41.703098 1399974 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1313514643/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:28:41.731398 1399974 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1313514643/001/docker-machine-driver-kvm2 version is 1.1.1
W0926 23:28:41.731439 1399974 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0926 23:28:41.731554 1399974 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0926 23:28:41.731593 1399974 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1313514643/001/docker-machine-driver-kvm2
I0926 23:28:42.081932 1399974 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1313514643/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:28:42.096551 1399974 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1313514643/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.52s)
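
The update path in the log: the staged driver reports version 1.1.1, minikube wants 1.37.0, so it re-downloads the binary and verifies it against the .sha256 release asset named in the checksum= fragment. The same download can be reproduced by hand (curl usage here is illustrative):

  $ curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
  $ curl -L https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256
  $ sha256sum docker-machine-driver-kvm2-amd64    # compare against the .sha256 contents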

TestErrorSpam/setup (22.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-220021 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-220021 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-220021 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-220021 --driver=docker  --container-runtime=docker: (22.325947648s)
--- PASS: TestErrorSpam/setup (22.33s)

TestErrorSpam/start (0.6s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 start --dry-run
--- PASS: TestErrorSpam/start (0.60s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 pause
--- PASS: TestErrorSpam/pause (1.15s)

TestErrorSpam/unpause (1.19s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 unpause
--- PASS: TestErrorSpam/unpause (1.19s)

TestErrorSpam/stop (10.88s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 stop: (10.693561734s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220021 --log_dir /tmp/nospam-220021 stop
--- PASS: TestErrorSpam/stop (10.88s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21642-1396392/.minikube/files/etc/test/nested/copy/1399974/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-618103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m5.381279414s)
--- PASS: TestFunctional/serial/StartWithProxy (65.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (55.02s)

=== RUN   TestFunctional/serial/SoftStart
I0926 22:45:40.203947 1399974 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618103 --alsologtostderr -v=8
E0926 22:46:24.380301 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:24.386762 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:24.398171 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:24.419627 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:24.461021 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:24.542505 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:24.704755 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:25.026470 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:25.668080 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:26.949395 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:29.510914 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:34.632331 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-618103 --alsologtostderr -v=8: (55.0174281s)
functional_test.go:678: soft start took 55.018220734s for "functional-618103" cluster.
I0926 22:46:35.221834 1399974 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (55.02s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-618103 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.06s)

TestFunctional/serial/CacheCmd/cache/add_local (0.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-618103 /tmp/TestFunctionalserialCacheCmdcacheadd_local2830831404/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cache add minikube-local-cache-test:functional-618103
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cache delete minikube-local-cache-test:functional-618103
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-618103
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.70s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.543851ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)
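
The reload round trip above, condensed: delete a cached image from the node's runtime, confirm crictl no longer sees it (the expected exit 1), then let cache reload re-push everything from the host-side cache:

  $ out/minikube-linux-amd64 -p functional-618103 ssh sudo docker rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-amd64 -p functional-618103 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  $ out/minikube-linux-amd64 -p functional-618103 cache reload
  $ out/minikube-linux-amd64 -p functional-618103 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again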

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 kubectl -- --context functional-618103 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-618103 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (50.69s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0926 22:46:44.873824 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:47:05.355783 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-618103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.688459485s)
functional_test.go:776: restart took 50.688611359s for "functional-618103" cluster.
I0926 22:47:30.798933 1399974 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (50.69s)
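
--extra-config takes component.key=value pairs and threads the setting into the generated configuration for that component; here it enables an extra apiserver admission plugin, and the resulting ExtraOptions entry is visible in the profile dumps later in this report. Usage, verbatim from the log:

  $ out/minikube-linux-amd64 start -p functional-618103 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all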

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-618103 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 logs: (1.008752531s)
--- PASS: TestFunctional/serial/LogsCmd (1.01s)

TestFunctional/serial/LogsFileCmd (1.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 logs --file /tmp/TestFunctionalserialLogsFileCmd2133043367/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 logs --file /tmp/TestFunctionalserialLogsFileCmd2133043367/001/logs.txt: (1.031208975s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.03s)

TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-618103 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-618103
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-618103: exit status 115 (330.27199ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31006 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-618103 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-618103 delete -f testdata/invalidsvc.yaml: (1.164373297s)
--- PASS: TestFunctional/serial/InvalidService (4.66s)
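
minikube service resolved the NodePort (31006) but refused to open a service with no running backing pod, exiting 115 with SVC_UNREACHABLE; that non-zero exit is exactly what the test wants. Reproduced by hand, with testdata/invalidsvc.yaml as the deliberately broken manifest:

  $ kubectl --context functional-618103 apply -f testdata/invalidsvc.yaml
  $ out/minikube-linux-amd64 service invalid-svc -p functional-618103; echo "exit=$?"   # exit=115
  $ kubectl --context functional-618103 delete -f testdata/invalidsvc.yaml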

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 config get cpus: exit status 14 (60.606796ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 config get cpus: exit status 14 (63.57615ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
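
The two expected exit-14s pin down config get semantics: a key that was never set, or was just unset, is an error rather than empty output. The full round trip:

  $ out/minikube-linux-amd64 -p functional-618103 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-618103 config get cpus     # prints 2
  $ out/minikube-linux-amd64 -p functional-618103 config unset cpus
  $ out/minikube-linux-amd64 -p functional-618103 config get cpus     # exit 14: key not found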

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (150.578409ms)
-- stdout --
	* [functional-618103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0926 22:53:42.711519 1470247 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:42.711785 1470247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.711793 1470247 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:42.711797 1470247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.711966 1470247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:53:42.712459 1470247 out.go:368] Setting JSON to false
	I0926 22:53:42.713397 1470247 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16567,"bootTime":1758910656,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:42.713502 1470247 start.go:140] virtualization: kvm guest
	I0926 22:53:42.715215 1470247 out.go:179] * [functional-618103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:42.716463 1470247 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:42.716463 1470247 notify.go:220] Checking for updates...
	I0926 22:53:42.718917 1470247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:42.720320 1470247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:53:42.721591 1470247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:53:42.722921 1470247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:42.723966 1470247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:42.725388 1470247 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:53:42.725844 1470247 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:42.749372 1470247 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:53:42.749445 1470247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:42.804711 1470247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:42.794761514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:42.804867 1470247 docker.go:318] overlay module found
	I0926 22:53:42.807164 1470247 out.go:179] * Using the docker driver based on existing profile
	I0926 22:53:42.808250 1470247 start.go:304] selected driver: docker
	I0926 22:53:42.808264 1470247 start.go:924] validating driver "docker" against &{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:42.808354 1470247 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:42.810073 1470247 out.go:203] 
	W0926 22:53:42.811227 1470247 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:53:42.812214 1470247 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.35s)
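
--dry-run still runs the full validation pipeline, so the undersized request fails fast: 250MB is below the 1800MB usable minimum and the run exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, without touching the running cluster. Condensed, with ancillary flags dropped:

  $ out/minikube-linux-amd64 start -p functional-618103 --dry-run --memory 250MB --driver=docker   # exit 23
  $ out/minikube-linux-amd64 start -p functional-618103 --dry-run --driver=docker                  # validates cleanly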

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-618103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (152.397857ms)
-- stdout --
	* [functional-618103] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0926 22:53:42.560335 1470165 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:42.560446 1470165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.560455 1470165 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:42.560459 1470165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:42.560806 1470165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 22:53:42.561354 1470165 out.go:368] Setting JSON to false
	I0926 22:53:42.562447 1470165 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16567,"bootTime":1758910656,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:42.562555 1470165 start.go:140] virtualization: kvm guest
	I0926 22:53:42.564875 1470165 out.go:179] * [functional-618103] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:42.566060 1470165 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:42.566095 1470165 notify.go:220] Checking for updates...
	I0926 22:53:42.568319 1470165 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:42.569700 1470165 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	I0926 22:53:42.570755 1470165 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	I0926 22:53:42.574978 1470165 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:42.576222 1470165 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:42.577696 1470165 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 22:53:42.578158 1470165 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:42.602065 1470165 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:53:42.602152 1470165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:42.654347 1470165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-26 22:53:42.644948192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:42.654453 1470165 docker.go:318] overlay module found
	I0926 22:53:42.656229 1470165 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0926 22:53:42.657237 1470165 start.go:304] selected driver: docker
	I0926 22:53:42.657253 1470165 start.go:924] validating driver "docker" against &{Name:functional-618103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-618103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:42.657348 1470165 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:42.658895 1470165 out.go:203] 
	W0926 22:53:42.660008 1470165 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0926 22:53:42.661166 1470165 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
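
status -f takes a Go template over the status struct, and -o json emits the same fields machine-readably. A smaller template than the test's, using only fields the log confirms:

  $ out/minikube-linux-amd64 -p functional-618103 status -f host:{{.Host}},apiserver:{{.APIServer}}
  $ out/minikube-linux-amd64 -p functional-618103 status -o json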

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (1.9s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh -n functional-618103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cp functional-618103:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1931392159/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh -n functional-618103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh -n functional-618103 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)
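
cp works in both directions: host-to-node with an absolute in-node destination, and node-to-host by prefixing the source with the profile name; the ssh -n cat calls are the content checks. Condensed, with the host destination shortened for illustration:

  $ out/minikube-linux-amd64 -p functional-618103 cp testdata/cp-test.txt /home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p functional-618103 cp functional-618103:/home/docker/cp-test.txt /tmp/cp-test.txt
  $ out/minikube-linux-amd64 -p functional-618103 ssh -n functional-618103 "sudo cat /home/docker/cp-test.txt"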

TestFunctional/parallel/MySQL (20.14s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-618103 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vxwl9" [774f514c-da37-45ad-8c7e-15b2d9f1dacc] Pending
helpers_test.go:352: "mysql-5bb876957f-vxwl9" [774f514c-da37-45ad-8c7e-15b2d9f1dacc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-vxwl9" [774f514c-da37-45ad-8c7e-15b2d9f1dacc] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003476577s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-618103 exec mysql-5bb876957f-vxwl9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-618103 exec mysql-5bb876957f-vxwl9 -- mysql -ppassword -e "show databases;": exit status 1 (125.212284ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0926 22:47:55.568281 1399974 retry.go:31] will retry after 1.040622373s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-618103 exec mysql-5bb876957f-vxwl9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-618103 exec mysql-5bb876957f-vxwl9 -- mysql -ppassword -e "show databases;": exit status 1 (121.66779ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0926 22:47:56.731639 1399974 retry.go:31] will retry after 1.531061014s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-618103 exec mysql-5bb876957f-vxwl9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.14s)
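The two failed probes above are the usual MySQL readiness race: the pod reports Running before mysqld is actually serving (ERROR 1045 while the init scripts still hold temporary credentials, ERROR 2002 while the socket is down), so the test retries with a growing pause, as the retry.go lines show. A minimal Go sketch of that retry loop, assuming the context and pod name from this log; the helper name and doubling backoff are illustrative, not the suite's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL retries the probe query until mysqld accepts it or the
// deadline passes, doubling the pause between attempts.
func waitForMySQL(kubeContext, pod string, deadline time.Duration) error {
	backoff := time.Second
	for start := time.Now(); time.Since(start) < deadline; backoff *= 2 {
		out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return nil
		}
		time.Sleep(backoff) // ERROR 1045/2002 while mysqld is still initializing
	}
	return fmt.Errorf("mysql not ready within %v", deadline)
}

func main() {
	if err := waitForMySQL("functional-618103", "mysql-5bb876957f-vxwl9", 10*time.Minute); err != nil {
		fmt.Println(err)
	}
}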

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1399974/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /etc/test/nested/copy/1399974/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.8s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1399974.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /etc/ssl/certs/1399974.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1399974.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /usr/share/ca-certificates/1399974.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/13999742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /etc/ssl/certs/13999742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/13999742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /usr/share/ca-certificates/13999742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)
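The test expects the same CA under three names: the per-run .pem path, the shared ca-certificates copy, and a hashed entry like 51391683.0, which appears to be the OpenSSL subject-hash form used for lookup in /etc/ssl/certs. A small sketch of the same presence check, assuming the paths from this log and using `test -s` in place of the suite's `sudo cat`:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths are the ones probed in the log; 51391683.0 is presumably the
	// OpenSSL subject-hash name under which the same CA is linked.
	paths := []string{
		"/etc/ssl/certs/1399974.pem",
		"/usr/share/ca-certificates/1399974.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-618103",
			"ssh", "sudo test -s "+p).Run()
		fmt.Println(p, "present:", err == nil)
	}
}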

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-618103 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh "sudo systemctl is-active crio": exit status 1 (300.253305ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
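The non-zero exit here is the point of the test: `systemctl is-active` exits 0 only for an active unit (3 for an inactive one, surfaced through ssh), so "inactive" on stdout plus a failing exit confirms crio is disabled while docker is the active runtime. A sketch of that interpretation, assuming the binary path and profile from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() captures stdout only; with the docker runtime active, crio
	// should report "inactive" and exit non-zero (systemd uses exit code 3
	// for inactive units, surfaced here through ssh).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-618103",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	switch {
	case err != nil && state == "inactive":
		fmt.Println("crio disabled, as expected on a docker-runtime cluster")
	case err != nil:
		fmt.Printf("unexpected state %q: %v\n", state, err)
	default:
		fmt.Println("crio is active")
	}
}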

TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618103 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-618103
docker.io/kicbase/echo-server:functional-618103
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618103 image ls --format short --alsologtostderr:
I0926 22:53:48.864209 1471878 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:48.864527 1471878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:48.864540 1471878 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:48.864548 1471878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:48.864774 1471878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
I0926 22:53:48.865408 1471878 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:48.865521 1471878 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:48.865891 1471878 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:48.883720 1471878 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:48.883781 1471878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:48.900848 1471878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:48.993697 1471878 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618103 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ docker.io/kicbase/echo-server               │ functional-618103 │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ localhost/my-image                          │ functional-618103 │ bc8fe8daa75e5 │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-618103 │ ab4fd4b59fa32 │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618103 image ls --format table --alsologtostderr:
I0926 22:53:52.244405 1472359 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:52.244526 1472359 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:52.244536 1472359 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:52.244541 1472359 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:52.244760 1472359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
I0926 22:53:52.245354 1472359 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:52.245450 1472359 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:52.245825 1472359 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:52.263557 1472359 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:52.263611 1472359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:52.280876 1472359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:52.373557 1472359 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618103 image ls --format json --alsologtostderr:
[{"id":"ab4fd4b59fa32a3e355c5c1142d2bafa5ec1fd08d2d21b785fb9936ee1423583","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-618103"],"size":"30"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-618103"],"size":"4940000"},{"id":"bc8fe8daa75e5a37c423ba19856d5b8d63087ec34002fe31667d7c3960081000","repoDigests":[],"repoTags":["localhost/my-image:functional-618103"],"size":"1240000"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618103 image ls --format json --alsologtostderr:
I0926 22:53:52.042302 1472310 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:52.042593 1472310 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:52.042605 1472310 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:52.042611 1472310 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:52.042822 1472310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
I0926 22:53:52.043411 1472310 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:52.043554 1472310 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:52.043946 1472310 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:52.061747 1472310 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:52.061800 1472310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:52.079974 1472310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:52.172533 1472310 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
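The JSON variant is the machine-readable one: each element carries id, repoDigests, repoTags, and a byte size encoded as a string. A minimal Go sketch of consuming it, assuming the same binary and profile as above; the struct name is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above; sizes arrive
// as strings of bytes, not numbers.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-618103",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}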

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618103 image ls --format yaml --alsologtostderr:
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-618103
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: ab4fd4b59fa32a3e355c5c1142d2bafa5ec1fd08d2d21b785fb9936ee1423583
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-618103
size: "30"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618103 image ls --format yaml --alsologtostderr:
I0926 22:53:49.072236 1471928 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:49.072541 1471928 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:49.072552 1471928 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:49.072557 1471928 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:49.072756 1471928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
I0926 22:53:49.073372 1471928 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:49.073491 1471928 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:49.073859 1471928 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:49.091504 1471928 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:49.091555 1471928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:49.108908 1471928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:49.202624 1471928 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh pgrep buildkitd: exit status 1 (255.95118ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image build -t localhost/my-image:functional-618103 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 image build -t localhost/my-image:functional-618103 testdata/build --alsologtostderr: (2.308203907s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618103 image build -t localhost/my-image:functional-618103 testdata/build --alsologtostderr:
I0926 22:53:49.533416 1472075 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:49.533686 1472075 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:49.533695 1472075 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:49.533699 1472075 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:49.533875 1472075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
I0926 22:53:49.534454 1472075 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:49.535209 1472075 config.go:182] Loaded profile config "functional-618103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0926 22:53:49.535636 1472075 cli_runner.go:164] Run: docker container inspect functional-618103 --format={{.State.Status}}
I0926 22:53:49.553917 1472075 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:49.553972 1472075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-618103
I0926 22:53:49.570864 1472075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/functional-618103/id_rsa Username:docker}
I0926 22:53:49.664825 1472075 build_images.go:161] Building image from path: /tmp/build.1555547973.tar
I0926 22:53:49.664901 1472075 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0926 22:53:49.674324 1472075 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1555547973.tar
I0926 22:53:49.677766 1472075 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1555547973.tar: stat -c "%s %y" /var/lib/minikube/build/build.1555547973.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1555547973.tar': No such file or directory
I0926 22:53:49.677795 1472075 ssh_runner.go:362] scp /tmp/build.1555547973.tar --> /var/lib/minikube/build/build.1555547973.tar (3072 bytes)
I0926 22:53:49.702423 1472075 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1555547973
I0926 22:53:49.711971 1472075 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1555547973 -xf /var/lib/minikube/build/build.1555547973.tar
I0926 22:53:49.721073 1472075 docker.go:361] Building image: /var/lib/minikube/build/build.1555547973
I0926 22:53:49.721124 1472075 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-618103 /var/lib/minikube/build/build.1555547973
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:bc8fe8daa75e5a37c423ba19856d5b8d63087ec34002fe31667d7c3960081000 done
#8 naming to localhost/my-image:functional-618103 done
#8 DONE 0.0s
I0926 22:53:51.770841 1472075 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-618103 /var/lib/minikube/build/build.1555547973: (2.049695585s)
I0926 22:53:51.770901 1472075 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1555547973
I0926 22:53:51.780604 1472075 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1555547973.tar
I0926 22:53:51.790213 1472075 build_images.go:217] Built localhost/my-image:functional-618103 from /tmp/build.1555547973.tar
I0926 22:53:51.790245 1472075 build_images.go:133] succeeded building to: functional-618103
I0926 22:53:51.790249 1472075 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
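The stderr trace shows the build pipeline: the context is tarred on the host, copied to /var/lib/minikube/build on the node, untarred, and handed to `docker build -t`. A loose local replay of those three steps under assumed /tmp paths; this is a sketch, not minikube's build_images code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and stops on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	run("mkdir", "-p", "/tmp/build.ctx")
	run("tar", "-C", "/tmp/build.ctx", "-xf", "/tmp/build.tar") // staged context tar
	run("docker", "build", "-t", "localhost/my-image:sketch", "/tmp/build.ctx")
}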

TestFunctional/parallel/ImageCommands/Setup (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-618103
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.42s)

TestFunctional/parallel/DockerEnv/bash (1.07s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-618103 docker-env) && out/minikube-linux-amd64 status -p functional-618103"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-618103 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.07s)
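`docker-env` works by printing shell exports (DOCKER_HOST and friends); the test evals them and runs `docker images` in the same shell, so the host docker CLI talks to the daemon inside the cluster. The same invocation from Go, assuming the binary path and profile above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The eval and the docker call must share one shell, hence bash -c,
	// exactly as the test invokes it.
	script := "eval $(out/minikube-linux-amd64 -p functional-618103 docker-env) && docker images"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}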

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image load --daemon kicbase/echo-server:functional-618103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image load --daemon kicbase/echo-server:functional-618103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-618103
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image load --daemon kicbase/echo-server:functional-618103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-618103 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-618103 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-618103 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-618103 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1462232: os: process already finished
helpers_test.go:525: unable to kill pid 1461975: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-618103 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image save kicbase/echo-server:functional-618103 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image rm kicbase/echo-server:functional-618103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-618103
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 image save --daemon kicbase/echo-server:functional-618103 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-618103
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
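Taken together, the image tests above form a save/load round-trip: save a tagged image to a tar, remove it, load it back from the tar, then save it straight into the host docker daemon and inspect it there. A condensed sketch of that sequence using the flags visible in the logs; the tar path is illustrative:

package main

import "os/exec"

// run panics on failure so the sequence stops at the first broken step.
func run(args ...string) {
	if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
		panic(err)
	}
}

func main() {
	const p, img, tar = "functional-618103", "kicbase/echo-server:functional-618103", "/tmp/echo-server.tar"
	run("out/minikube-linux-amd64", "-p", p, "image", "save", img, tar)        // image -> tar
	run("out/minikube-linux-amd64", "-p", p, "image", "rm", img)               // drop it
	run("out/minikube-linux-amd64", "-p", p, "image", "load", tar)             // tar -> image
	run("out/minikube-linux-amd64", "-p", p, "image", "save", "--daemon", img) // image -> host daemon
	run("docker", "image", "inspect", img)                                     // verify on the host
}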

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-618103 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "321.18174ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "49.630207ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "318.074897ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "49.438433ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (7.49s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdany-port2469741919/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758927210743049400" to /tmp/TestFunctionalparallelMountCmdany-port2469741919/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758927210743049400" to /tmp/TestFunctionalparallelMountCmdany-port2469741919/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758927210743049400" to /tmp/TestFunctionalparallelMountCmdany-port2469741919/001/test-1758927210743049400
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.938853ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 22:53:31.007264 1399974 retry.go:31] will retry after 380.826235ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 26 22:53 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 26 22:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 26 22:53 test-1758927210743049400
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh cat /mount-9p/test-1758927210743049400
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-618103 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [729125ea-6953-4274-aabe-af8ca62eeaf8] Pending
helpers_test.go:352: "busybox-mount" [729125ea-6953-4274-aabe-af8ca62eeaf8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [729125ea-6953-4274-aabe-af8ca62eeaf8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [729125ea-6953-4274-aabe-af8ca62eeaf8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00377003s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-618103 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdany-port2469741919/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.49s)
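The first findmnt probe fails because the mount daemon starts asynchronously; the test polls until the 9p mount is visible in the guest, then compares host-side and guest-side contents. A minimal sketch of that poll, assuming the profile and mount point from this log; the retry bounds are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mounted reports whether the 9p mount is visible inside the guest yet.
func mounted(profile, dir string) bool {
	return exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", dir)).Run() == nil
}

func main() {
	// The mount daemon comes up in the background, hence the poll; the
	// first findmnt in the log fails for exactly this reason.
	for i := 0; i < 10 && !mounted("functional-618103", "/mount-9p"); i++ {
		time.Sleep(500 * time.Millisecond)
	}
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-618103",
		"ssh", "--", "ls", "-la", "/mount-9p").CombinedOutput()
	fmt.Printf("%s", out)
}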

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdspecific-port3321653878/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.97086ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 22:53:38.494953 1399974 retry.go:31] will retry after 501.595312ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdspecific-port3321653878/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh "sudo umount -f /mount-9p": exit status 1 (257.2105ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-618103 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdspecific-port3321653878/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T" /mount1: exit status 1 (316.418121ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 22:53:40.291170 1399974 retry.go:31] will retry after 507.322472ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-618103 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618103 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1546455886/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 version -o=json --components
E0926 22:56:24.380952 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 service list: (1.688811317s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-618103 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-618103 service list -o json: (1.683213692s)
functional_test.go:1504: Took "1.683300081s" to run "out/minikube-linux-amd64 -p functional-618103 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-618103
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-618103
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-618103
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (93.54s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m32.815144055s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (93.54s)

TestMultiControlPlane/serial/DeployApp (49.22s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 kubectl -- rollout status deployment/busybox: (2.839343073s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:24.397685 1399974 retry.go:31] will retry after 1.33438863s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:25.852904 1399974 retry.go:31] will retry after 952.556896ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:26.924258 1399974 retry.go:31] will retry after 1.829824427s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:28.873218 1399974 retry.go:31] will retry after 4.919300899s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:33.918816 1399974 retry.go:31] will retry after 6.028443824s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:40.065739 1399974 retry.go:31] will retry after 5.934674418s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:46.118066 1399974 retry.go:31] will retry after 7.761239292s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0926 23:00:54.023953 1399974 retry.go:31] will retry after 14.638732777s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-gg5pg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-j7hhq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-x8bvq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-gg5pg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-j7hhq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-x8bvq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-gg5pg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-j7hhq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-x8bvq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (49.22s)

TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-gg5pg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-gg5pg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-j7hhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-j7hhq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-x8bvq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 kubectl -- exec busybox-7b57f96db7-x8bvq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

TestMultiControlPlane/serial/AddWorkerNode (14.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node add --alsologtostderr -v 5
E0926 23:01:24.380847 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 node add --alsologtostderr -v 5: (13.390844879s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (14.26s)

TestMultiControlPlane/serial/NodeLabels (0.09s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-381905 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

TestMultiControlPlane/serial/CopyFile (16.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp testdata/cp-test.txt ha-381905:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1269455257/001/cp-test_ha-381905.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905:/home/docker/cp-test.txt ha-381905-m02:/home/docker/cp-test_ha-381905_ha-381905-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test_ha-381905_ha-381905-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905:/home/docker/cp-test.txt ha-381905-m03:/home/docker/cp-test_ha-381905_ha-381905-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test_ha-381905_ha-381905-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905:/home/docker/cp-test.txt ha-381905-m04:/home/docker/cp-test_ha-381905_ha-381905-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test_ha-381905_ha-381905-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp testdata/cp-test.txt ha-381905-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1269455257/001/cp-test_ha-381905-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m02:/home/docker/cp-test.txt ha-381905:/home/docker/cp-test_ha-381905-m02_ha-381905.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test_ha-381905-m02_ha-381905.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m02:/home/docker/cp-test.txt ha-381905-m03:/home/docker/cp-test_ha-381905-m02_ha-381905-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test_ha-381905-m02_ha-381905-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m02:/home/docker/cp-test.txt ha-381905-m04:/home/docker/cp-test_ha-381905-m02_ha-381905-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test_ha-381905-m02_ha-381905-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp testdata/cp-test.txt ha-381905-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1269455257/001/cp-test_ha-381905-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m03:/home/docker/cp-test.txt ha-381905:/home/docker/cp-test_ha-381905-m03_ha-381905.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test_ha-381905-m03_ha-381905.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m03:/home/docker/cp-test.txt ha-381905-m02:/home/docker/cp-test_ha-381905-m03_ha-381905-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test_ha-381905-m03_ha-381905-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m03:/home/docker/cp-test.txt ha-381905-m04:/home/docker/cp-test_ha-381905-m03_ha-381905-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test_ha-381905-m03_ha-381905-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp testdata/cp-test.txt ha-381905-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1269455257/001/cp-test_ha-381905-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m04:/home/docker/cp-test.txt ha-381905:/home/docker/cp-test_ha-381905-m04_ha-381905.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905 "sudo cat /home/docker/cp-test_ha-381905-m04_ha-381905.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m04:/home/docker/cp-test.txt ha-381905-m02:/home/docker/cp-test_ha-381905-m04_ha-381905-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m02 "sudo cat /home/docker/cp-test_ha-381905-m04_ha-381905-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 cp ha-381905-m04:/home/docker/cp-test.txt ha-381905-m03:/home/docker/cp-test_ha-381905-m04_ha-381905-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 ssh -n ha-381905-m03 "sudo cat /home/docker/cp-test_ha-381905-m04_ha-381905-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.76s)

TestMultiControlPlane/serial/StopSecondaryNode (11.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 node stop m02 --alsologtostderr -v 5: (10.779740382s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5: exit status 7 (674.548993ms)

-- stdout --
	ha-381905
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-381905-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-381905-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-381905-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0926 23:01:54.529553 1504152 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:01:54.529794 1504152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:01:54.529801 1504152 out.go:374] Setting ErrFile to fd 2...
	I0926 23:01:54.529806 1504152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:01:54.529987 1504152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 23:01:54.530153 1504152 out.go:368] Setting JSON to false
	I0926 23:01:54.530198 1504152 mustload.go:65] Loading cluster: ha-381905
	I0926 23:01:54.530266 1504152 notify.go:220] Checking for updates...
	I0926 23:01:54.530600 1504152 config.go:182] Loaded profile config "ha-381905": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 23:01:54.530616 1504152 status.go:174] checking status of ha-381905 ...
	I0926 23:01:54.531038 1504152 cli_runner.go:164] Run: docker container inspect ha-381905 --format={{.State.Status}}
	I0926 23:01:54.549994 1504152 status.go:371] ha-381905 host status = "Running" (err=<nil>)
	I0926 23:01:54.550015 1504152 host.go:66] Checking if "ha-381905" exists ...
	I0926 23:01:54.550254 1504152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-381905
	I0926 23:01:54.567358 1504152 host.go:66] Checking if "ha-381905" exists ...
	I0926 23:01:54.567653 1504152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:01:54.567692 1504152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-381905
	I0926 23:01:54.584916 1504152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/ha-381905/id_rsa Username:docker}
	I0926 23:01:54.678783 1504152 ssh_runner.go:195] Run: systemctl --version
	I0926 23:01:54.683435 1504152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:01:54.694861 1504152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:01:54.749095 1504152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:01:54.739034801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:01:54.749788 1504152 kubeconfig.go:125] found "ha-381905" server: "https://192.168.49.254:8443"
	I0926 23:01:54.749825 1504152 api_server.go:166] Checking apiserver status ...
	I0926 23:01:54.749864 1504152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:01:54.762732 1504152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	W0926 23:01:54.772691 1504152 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:01:54.772755 1504152 ssh_runner.go:195] Run: ls
	I0926 23:01:54.776296 1504152 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0926 23:01:54.780590 1504152 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0926 23:01:54.780615 1504152 status.go:463] ha-381905 apiserver status = Running (err=<nil>)
	I0926 23:01:54.780627 1504152 status.go:176] ha-381905 status: &{Name:ha-381905 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:01:54.780641 1504152 status.go:174] checking status of ha-381905-m02 ...
	I0926 23:01:54.780895 1504152 cli_runner.go:164] Run: docker container inspect ha-381905-m02 --format={{.State.Status}}
	I0926 23:01:54.798681 1504152 status.go:371] ha-381905-m02 host status = "Stopped" (err=<nil>)
	I0926 23:01:54.798699 1504152 status.go:384] host is not running, skipping remaining checks
	I0926 23:01:54.798706 1504152 status.go:176] ha-381905-m02 status: &{Name:ha-381905-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:01:54.798723 1504152 status.go:174] checking status of ha-381905-m03 ...
	I0926 23:01:54.798942 1504152 cli_runner.go:164] Run: docker container inspect ha-381905-m03 --format={{.State.Status}}
	I0926 23:01:54.816955 1504152 status.go:371] ha-381905-m03 host status = "Running" (err=<nil>)
	I0926 23:01:54.816979 1504152 host.go:66] Checking if "ha-381905-m03" exists ...
	I0926 23:01:54.817242 1504152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-381905-m03
	I0926 23:01:54.836326 1504152 host.go:66] Checking if "ha-381905-m03" exists ...
	I0926 23:01:54.836627 1504152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:01:54.836676 1504152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-381905-m03
	I0926 23:01:54.854337 1504152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33906 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/ha-381905-m03/id_rsa Username:docker}
	I0926 23:01:54.948709 1504152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:01:54.962357 1504152 kubeconfig.go:125] found "ha-381905" server: "https://192.168.49.254:8443"
	I0926 23:01:54.962383 1504152 api_server.go:166] Checking apiserver status ...
	I0926 23:01:54.962414 1504152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:01:54.974188 1504152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2184/cgroup
	W0926 23:01:54.984183 1504152 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:01:54.984235 1504152 ssh_runner.go:195] Run: ls
	I0926 23:01:54.987914 1504152 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0926 23:01:54.994548 1504152 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0926 23:01:54.994572 1504152 status.go:463] ha-381905-m03 apiserver status = Running (err=<nil>)
	I0926 23:01:54.994583 1504152 status.go:176] ha-381905-m03 status: &{Name:ha-381905-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:01:54.994603 1504152 status.go:174] checking status of ha-381905-m04 ...
	I0926 23:01:54.994835 1504152 cli_runner.go:164] Run: docker container inspect ha-381905-m04 --format={{.State.Status}}
	I0926 23:01:55.012886 1504152 status.go:371] ha-381905-m04 host status = "Running" (err=<nil>)
	I0926 23:01:55.012912 1504152 host.go:66] Checking if "ha-381905-m04" exists ...
	I0926 23:01:55.013160 1504152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-381905-m04
	I0926 23:01:55.031023 1504152 host.go:66] Checking if "ha-381905-m04" exists ...
	I0926 23:01:55.031357 1504152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:01:55.031404 1504152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-381905-m04
	I0926 23:01:55.049451 1504152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/ha-381905-m04/id_rsa Username:docker}
	I0926 23:01:55.142641 1504152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:01:55.155073 1504152 status.go:176] ha-381905-m04 status: &{Name:ha-381905-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.46s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (59.49s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node start m02 --alsologtostderr -v 5
E0926 23:02:38.439862 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:38.446286 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:38.457694 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:38.479089 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:38.520514 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:38.601962 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:38.763510 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:39.085204 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:39.727342 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:41.009186 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:43.570937 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:47.443931 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:48.692429 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 node start m02 --alsologtostderr -v 5: (58.523161121s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (59.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (222.74s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 stop --alsologtostderr -v 5
E0926 23:02:58.933783 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:03:19.415699 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 stop --alsologtostderr -v 5: (33.5040454s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 start --wait true --alsologtostderr -v 5
E0926 23:04:00.378644 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:05:22.302851 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:06:24.380769 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 start --wait true --alsologtostderr -v 5: (3m9.131810342s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (222.74s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 node delete m03 --alsologtostderr -v 5: (8.460567232s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.29s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (32.46s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 stop --alsologtostderr -v 5: (32.355350786s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5: exit status 7 (104.101066ms)

-- stdout --
	ha-381905
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-381905-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-381905-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0926 23:07:21.442529 1535957 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:07:21.442624 1535957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:07:21.442629 1535957 out.go:374] Setting ErrFile to fd 2...
	I0926 23:07:21.442633 1535957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:07:21.442828 1535957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 23:07:21.442993 1535957 out.go:368] Setting JSON to false
	I0926 23:07:21.443043 1535957 mustload.go:65] Loading cluster: ha-381905
	I0926 23:07:21.443144 1535957 notify.go:220] Checking for updates...
	I0926 23:07:21.443401 1535957 config.go:182] Loaded profile config "ha-381905": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 23:07:21.443415 1535957 status.go:174] checking status of ha-381905 ...
	I0926 23:07:21.443840 1535957 cli_runner.go:164] Run: docker container inspect ha-381905 --format={{.State.Status}}
	I0926 23:07:21.462880 1535957 status.go:371] ha-381905 host status = "Stopped" (err=<nil>)
	I0926 23:07:21.462900 1535957 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:21.462907 1535957 status.go:176] ha-381905 status: &{Name:ha-381905 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:07:21.462949 1535957 status.go:174] checking status of ha-381905-m02 ...
	I0926 23:07:21.463209 1535957 cli_runner.go:164] Run: docker container inspect ha-381905-m02 --format={{.State.Status}}
	I0926 23:07:21.480906 1535957 status.go:371] ha-381905-m02 host status = "Stopped" (err=<nil>)
	I0926 23:07:21.480944 1535957 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:21.480953 1535957 status.go:176] ha-381905-m02 status: &{Name:ha-381905-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:07:21.480981 1535957 status.go:174] checking status of ha-381905-m04 ...
	I0926 23:07:21.481285 1535957 cli_runner.go:164] Run: docker container inspect ha-381905-m04 --format={{.State.Status}}
	I0926 23:07:21.498548 1535957 status.go:371] ha-381905-m04 host status = "Stopped" (err=<nil>)
	I0926 23:07:21.498570 1535957 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:21.498578 1535957 status.go:176] ha-381905-m04 status: &{Name:ha-381905-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.46s)

TestMultiControlPlane/serial/RestartCluster (106.06s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0926 23:07:38.439967 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:08:06.145521 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m45.280473692s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (106.06s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (31.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-381905 node add --control-plane --alsologtostderr -v 5: (30.483327932s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-381905 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (31.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.007601418s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestImageBuild/serial/Setup (23.99s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-560596 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-560596 --driver=docker  --container-runtime=docker: (23.985529485s)
--- PASS: TestImageBuild/serial/Setup (23.99s)

TestImageBuild/serial/NormalBuild (1.02s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-560596
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-560596: (1.016245891s)
--- PASS: TestImageBuild/serial/NormalBuild (1.02s)

TestImageBuild/serial/BuildWithBuildArg (0.64s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-560596
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.64s)

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-560596
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.47s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-560596
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.47s)

TestJSONOutput/start/Command (69.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-706340 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0926 23:11:24.384919 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-706340 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m9.180779844s)
--- PASS: TestJSONOutput/start/Command (69.18s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-706340 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-706340 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-706340 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-706340 --output=json --user=testUser: (10.76601764s)
--- PASS: TestJSONOutput/stop/Command (10.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-885121 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-885121 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (64.429983ms)
-- stdout --
	{"specversion":"1.0","id":"1e79a5e8-bbad-4048-8745-8d76772c5dd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-885121] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"70724a83-178e-4866-9222-620118e4be83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"c838b883-bc04-4072-ace9-e5642fc12104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"46f67f26-1d7c-429f-838d-98a5c01db777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig"}}
	{"specversion":"1.0","id":"075a694a-f9dd-43cb-a657-df5ff970406b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube"}}
	{"specversion":"1.0","id":"998c1352-1852-4c96-a76a-7ef9f0458e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4c3f30ac-5c8f-4557-8910-3cb060b0eed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c60f7d2-008e-4f53-b8a5-ad4adab6be1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-885121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-885121
--- PASS: TestErrorJSONOutput (0.20s)
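
The stdout above is a stream of line-delimited CloudEvents, one JSON object per line, ending in an io.k8s.sigs.minikube.error event with exitcode 56. A minimal Go sketch of a consumer for such a stream (illustrative only, not minikube's own decoder; the struct mirrors just the fields visible in the log above):

	// Decode line-delimited CloudEvents as printed by
	// `minikube start --output=json` and report error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe minikube's stdout in here
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip any non-JSON lines
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}
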

TestKicCustomNetwork/create_custom_network (24.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-969313 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-969313 --network=: (22.779742299s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-969313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-969313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-969313: (2.106810951s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.91s)

TestKicCustomNetwork/use_default_bridge_network (25.16s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-723882 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-723882 --network=bridge: (23.189937238s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-723882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-723882
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-723882: (1.946231036s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.16s)

TestKicExistingNetwork (24.07s)

=== RUN   TestKicExistingNetwork
I0926 23:12:30.057250 1399974 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0926 23:12:30.074023 1399974 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0926 23:12:30.074089 1399974 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0926 23:12:30.074107 1399974 cli_runner.go:164] Run: docker network inspect existing-network
W0926 23:12:30.090452 1399974 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0926 23:12:30.090493 1399974 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0926 23:12:30.090512 1399974 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0926 23:12:30.090670 1399974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0926 23:12:30.108191 1399974 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cd010d5dd3e9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:d0:98:4d:29:8e} reservation:<nil>}
I0926 23:12:30.108605 1399974 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015472b0}
I0926 23:12:30.108642 1399974 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0926 23:12:30.108689 1399974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0926 23:12:30.164644 1399974 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-668627 --network=existing-network
E0926 23:12:38.441674 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-668627 --network=existing-network: (21.986730434s)
helpers_test.go:175: Cleaning up "existing-network-668627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-668627
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-668627: (1.941624613s)
I0926 23:12:54.111159 1399974 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.07s)
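
The network_create.go lines above show the subnet scan: 192.168.49.0/24 is skipped as taken and 192.168.58.0/24 is chosen. A minimal Go sketch of that scan, assuming a hard-coded taken set and the +9 stride implied by the subnets seen in this report (49, 58, 67); minikube itself derives the taken set from `docker network inspect`:

	// Step through candidate private /24 blocks and pick the first
	// one not already in use (hard-coded here for illustration).
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true}
		for third := 49; third <= 255; third += 9 { // 49, 58, 67, ... as in the log
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[cidr] {
				fmt.Println("skipping subnet that is taken:", cidr)
				continue
			}
			fmt.Println("using free private subnet:", cidr)
			break
		}
	}
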

TestKicCustomSubnet (24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-164728 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-164728 --subnet=192.168.60.0/24: (21.863149013s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-164728 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-164728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-164728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-164728: (2.119026953s)
--- PASS: TestKicCustomSubnet (24.00s)

TestKicStaticIP (23.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-588985 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-588985 --static-ip=192.168.200.200: (21.556044207s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-588985 ip
helpers_test.go:175: Cleaning up "static-ip-588985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-588985
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-588985: (2.086643111s)
--- PASS: TestKicStaticIP (23.78s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.63s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-332640 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-332640 --driver=docker  --container-runtime=docker: (21.61228511s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-344282 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-344282 --driver=docker  --container-runtime=docker: (22.660597449s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-332640
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-344282
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-344282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-344282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-344282: (2.102916924s)
helpers_test.go:175: Cleaning up "first-332640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-332640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-332640: (2.091306196s)
--- PASS: TestMinikubeProfile (49.63s)

TestMountStart/serial/StartWithMountFirst (7.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-773204 --memory=3072 --mount-string /tmp/TestMountStartserial2779116892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-773204 --memory=3072 --mount-string /tmp/TestMountStartserial2779116892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.037626173s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.04s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-773204 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-790497 --memory=3072 --mount-string /tmp/TestMountStartserial2779116892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-790497 --memory=3072 --mount-string /tmp/TestMountStartserial2779116892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.402544581s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.40s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-790497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-773204 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-773204 --alsologtostderr -v=5: (1.493074007s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-790497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-790497
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-790497: (1.177585309s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (8.47s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-790497
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-790497: (7.467163886s)
--- PASS: TestMountStart/serial/RestartStopped (8.47s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-790497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (55.44s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048852 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048852 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (54.988991327s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (55.44s)

TestMultiNode/serial/DeployApp2Nodes (44.08s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-048852 -- rollout status deployment/busybox: (2.52645573s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:15:58.057272 1399974 retry.go:31] will retry after 1.180494505s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:15:59.350538 1399974 retry.go:31] will retry after 1.336587518s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:16:00.800519 1399974 retry.go:31] will retry after 1.576901269s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:16:02.493152 1399974 retry.go:31] will retry after 4.473618207s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:16:07.083794 1399974 retry.go:31] will retry after 5.723043893s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:16:12.927461 1399974 retry.go:31] will retry after 6.354372805s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:16:19.408153 1399974 retry.go:31] will retry after 5.94947169s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0926 23:16:24.383859 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0926 23:16:25.481149 1399974 retry.go:31] will retry after 12.583731647s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-7kg6r -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-9zncg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-7kg6r -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-9zncg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-7kg6r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-9zncg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (44.08s)
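
The retry.go lines above poll the pod IPs with growing, jittered waits until both busybox pods report an address. A minimal Go sketch of that retry shape (checkPodIPs is a hypothetical stand-in for the kubectl jsonpath query; the exact backoff schedule is minikube's own):

	// Retry with growing, jittered backoff until the expected
	// number of pod IPs is observed or attempts run out.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func checkPodIPs() (int, error) { return 1, nil } // stand-in: would run kubectl

	func main() {
		backoff := time.Second
		for attempt := 0; attempt < 9; attempt++ {
			if n, _ := checkPodIPs(); n == 2 {
				fmt.Println("both pod IPs assigned")
				return
			}
			// grow the wait and add jitter, matching the increasing delays in the log
			d := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Println("will retry after", d)
			time.Sleep(d)
			backoff *= 2
		}
	}
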

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-7kg6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-7kg6r -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-9zncg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048852 -- exec busybox-7b57f96db7-9zncg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (13.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-048852 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-048852 -v=5 --alsologtostderr: (13.139215804s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (13.76s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-048852 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (9.62s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp testdata/cp-test.txt multinode-048852:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2679700538/001/cp-test_multinode-048852.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852:/home/docker/cp-test.txt multinode-048852-m02:/home/docker/cp-test_multinode-048852_multinode-048852-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m02 "sudo cat /home/docker/cp-test_multinode-048852_multinode-048852-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852:/home/docker/cp-test.txt multinode-048852-m03:/home/docker/cp-test_multinode-048852_multinode-048852-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m03 "sudo cat /home/docker/cp-test_multinode-048852_multinode-048852-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp testdata/cp-test.txt multinode-048852-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2679700538/001/cp-test_multinode-048852-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852-m02:/home/docker/cp-test.txt multinode-048852:/home/docker/cp-test_multinode-048852-m02_multinode-048852.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852 "sudo cat /home/docker/cp-test_multinode-048852-m02_multinode-048852.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852-m02:/home/docker/cp-test.txt multinode-048852-m03:/home/docker/cp-test_multinode-048852-m02_multinode-048852-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m03 "sudo cat /home/docker/cp-test_multinode-048852-m02_multinode-048852-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp testdata/cp-test.txt multinode-048852-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2679700538/001/cp-test_multinode-048852-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852-m03:/home/docker/cp-test.txt multinode-048852:/home/docker/cp-test_multinode-048852-m03_multinode-048852.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852 "sudo cat /home/docker/cp-test_multinode-048852-m03_multinode-048852.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 cp multinode-048852-m03:/home/docker/cp-test.txt multinode-048852-m02:/home/docker/cp-test_multinode-048852-m03_multinode-048852-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 ssh -n multinode-048852-m02 "sudo cat /home/docker/cp-test_multinode-048852-m03_multinode-048852-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.62s)
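
The copy test above round-trips a file through every node: `minikube cp` writes it, then `minikube ssh -n <node> -- sudo cat` reads it back for comparison. A minimal Go sketch of one such round trip, assuming a minikube binary on PATH and reusing the profile and paths from the log:

	// Copy a file onto a node, read it back over ssh, and compare.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const profile, dst = "multinode-048852", "/home/docker/cp-test.txt"
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		if out, err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", profile+":"+dst).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp failed: %v: %s", err, out))
		}
		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
			"--", "sudo", "cat", dst).Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic("round-tripped file does not match")
		}
		fmt.Println("copy verified")
	}
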

TestMultiNode/serial/StopNode (2.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-048852 node stop m03: (1.216093197s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048852 status: exit status 7 (471.743289ms)
-- stdout --
	multinode-048852
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048852-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048852-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr: exit status 7 (472.896251ms)
-- stdout --
	multinode-048852
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048852-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048852-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0926 23:17:05.982966 1619732 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:17:05.983067 1619732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:17:05.983075 1619732 out.go:374] Setting ErrFile to fd 2...
	I0926 23:17:05.983079 1619732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:17:05.983250 1619732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 23:17:05.983411 1619732 out.go:368] Setting JSON to false
	I0926 23:17:05.983451 1619732 mustload.go:65] Loading cluster: multinode-048852
	I0926 23:17:05.983531 1619732 notify.go:220] Checking for updates...
	I0926 23:17:05.983837 1619732 config.go:182] Loaded profile config "multinode-048852": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 23:17:05.983855 1619732 status.go:174] checking status of multinode-048852 ...
	I0926 23:17:05.984242 1619732 cli_runner.go:164] Run: docker container inspect multinode-048852 --format={{.State.Status}}
	I0926 23:17:06.003797 1619732 status.go:371] multinode-048852 host status = "Running" (err=<nil>)
	I0926 23:17:06.003846 1619732 host.go:66] Checking if "multinode-048852" exists ...
	I0926 23:17:06.004180 1619732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-048852
	I0926 23:17:06.021704 1619732 host.go:66] Checking if "multinode-048852" exists ...
	I0926 23:17:06.021937 1619732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:17:06.021980 1619732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-048852
	I0926 23:17:06.039162 1619732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34021 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/multinode-048852/id_rsa Username:docker}
	I0926 23:17:06.132814 1619732 ssh_runner.go:195] Run: systemctl --version
	I0926 23:17:06.137037 1619732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:17:06.148552 1619732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:17:06.201789 1619732 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-26 23:17:06.191132811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:17:06.202439 1619732 kubeconfig.go:125] found "multinode-048852" server: "https://192.168.67.2:8443"
	I0926 23:17:06.202474 1619732 api_server.go:166] Checking apiserver status ...
	I0926 23:17:06.202541 1619732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:17:06.214721 1619732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2203/cgroup
	W0926 23:17:06.224041 1619732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:17:06.224106 1619732 ssh_runner.go:195] Run: ls
	I0926 23:17:06.227547 1619732 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0926 23:17:06.232352 1619732 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0926 23:17:06.232375 1619732 status.go:463] multinode-048852 apiserver status = Running (err=<nil>)
	I0926 23:17:06.232385 1619732 status.go:176] multinode-048852 status: &{Name:multinode-048852 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:17:06.232402 1619732 status.go:174] checking status of multinode-048852-m02 ...
	I0926 23:17:06.232683 1619732 cli_runner.go:164] Run: docker container inspect multinode-048852-m02 --format={{.State.Status}}
	I0926 23:17:06.251149 1619732 status.go:371] multinode-048852-m02 host status = "Running" (err=<nil>)
	I0926 23:17:06.251173 1619732 host.go:66] Checking if "multinode-048852-m02" exists ...
	I0926 23:17:06.251405 1619732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-048852-m02
	I0926 23:17:06.267712 1619732 host.go:66] Checking if "multinode-048852-m02" exists ...
	I0926 23:17:06.267983 1619732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:17:06.268027 1619732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-048852-m02
	I0926 23:17:06.285789 1619732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/21642-1396392/.minikube/machines/multinode-048852-m02/id_rsa Username:docker}
	I0926 23:17:06.377732 1619732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:17:06.389205 1619732 status.go:176] multinode-048852-m02 status: &{Name:multinode-048852-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:17:06.389243 1619732 status.go:174] checking status of multinode-048852-m03 ...
	I0926 23:17:06.389526 1619732 cli_runner.go:164] Run: docker container inspect multinode-048852-m03 --format={{.State.Status}}
	I0926 23:17:06.406950 1619732 status.go:371] multinode-048852-m03 host status = "Stopped" (err=<nil>)
	I0926 23:17:06.406972 1619732 status.go:384] host is not running, skipping remaining checks
	I0926 23:17:06.406979 1619732 status.go:176] multinode-048852-m03 status: &{Name:multinode-048852-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
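
The stderr above shows how `status` decides "apiserver: Running": it probes https://192.168.67.2:8443/healthz and expects HTTP 200 with body "ok" (api_server.go:253/279). A minimal Go sketch of that probe; TLS verification is skipped here purely to keep the sketch self-contained:

	// Probe the apiserver healthz endpoint and print the result.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		}}
		resp, err := c.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
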

TestMultiNode/serial/StartAfterStop (8.62s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-048852 node start m03 -v=5 --alsologtostderr: (7.938166079s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.62s)

TestMultiNode/serial/RestartKeepsNodes (73.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048852
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-048852
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-048852: (22.54604621s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048852 --wait=true -v=5 --alsologtostderr
E0926 23:17:38.440217 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048852 --wait=true -v=5 --alsologtostderr: (50.781562996s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048852
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.43s)

TestMultiNode/serial/DeleteNode (5.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-048852 node delete m03: (4.605485072s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)

TestMultiNode/serial/StopMultiNode (21.62s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-048852 stop: (21.443652938s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048852 status: exit status 7 (84.720416ms)
-- stdout --
	multinode-048852
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048852-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr: exit status 7 (87.100695ms)
-- stdout --
	multinode-048852
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048852-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0926 23:18:55.212833 1634478 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:18:55.212947 1634478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:18:55.212959 1634478 out.go:374] Setting ErrFile to fd 2...
	I0926 23:18:55.212965 1634478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:18:55.213169 1634478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-1396392/.minikube/bin
	I0926 23:18:55.213357 1634478 out.go:368] Setting JSON to false
	I0926 23:18:55.213398 1634478 mustload.go:65] Loading cluster: multinode-048852
	I0926 23:18:55.213501 1634478 notify.go:220] Checking for updates...
	I0926 23:18:55.213807 1634478 config.go:182] Loaded profile config "multinode-048852": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0926 23:18:55.213824 1634478 status.go:174] checking status of multinode-048852 ...
	I0926 23:18:55.214243 1634478 cli_runner.go:164] Run: docker container inspect multinode-048852 --format={{.State.Status}}
	I0926 23:18:55.235909 1634478 status.go:371] multinode-048852 host status = "Stopped" (err=<nil>)
	I0926 23:18:55.235929 1634478 status.go:384] host is not running, skipping remaining checks
	I0926 23:18:55.235934 1634478 status.go:176] multinode-048852 status: &{Name:multinode-048852 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:18:55.235956 1634478 status.go:174] checking status of multinode-048852-m02 ...
	I0926 23:18:55.236196 1634478 cli_runner.go:164] Run: docker container inspect multinode-048852-m02 --format={{.State.Status}}
	I0926 23:18:55.253002 1634478 status.go:371] multinode-048852-m02 host status = "Stopped" (err=<nil>)
	I0926 23:18:55.253025 1634478 status.go:384] host is not running, skipping remaining checks
	I0926 23:18:55.253031 1634478 status.go:176] multinode-048852-m02 status: &{Name:multinode-048852-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.62s)

TestMultiNode/serial/RestartMultiNode (52.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048852 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0926 23:19:01.507056 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:19:27.446101 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048852 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (52.322974394s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048852 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.91s)

TestMultiNode/serial/ValidateNameConflict (25.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048852
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048852-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-048852-m02 --driver=docker  --container-runtime=docker: exit status 14 (67.333826ms)
-- stdout --
	* [multinode-048852-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-048852-m02' is duplicated with machine name 'multinode-048852-m02' in profile 'multinode-048852'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048852-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048852-m03 --driver=docker  --container-runtime=docker: (23.025403299s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-048852
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-048852: exit status 80 (273.680968ms)

-- stdout --
	* Adding node m03 to cluster multinode-048852 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-048852-m03 already exists in multinode-048852-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-048852-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-048852-m03: (2.101452104s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.52s)
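This test covers both conflict paths shown above: a new profile name may not collide with a machine name inside an existing multi-node profile (MK_USAGE, exit 14), and `node add` refuses a name already owned by another profile (GUEST_NODE_ADD, exit 80). A minimal sketch of the uniqueness rule, using a stand-in profiles map rather than minikube's real config store:

```go
package main

import "fmt"

// profiles maps profile name -> machine names; a stand-in for the
// on-disk profile config minikube actually consults.
var profiles = map[string][]string{
	"multinode-048852": {"multinode-048852", "multinode-048852-m02"},
}

func validateProfileName(name string) error {
	for profile, machines := range profiles {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validateProfileName("multinode-048852-m02")) // taken -> error (minikube exits 14)
	fmt.Println(validateProfileName("multinode-048852-m03")) // free  -> <nil>
}
```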

TestPreload (143.34s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-948579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E0926 23:21:24.380397 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-948579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m11.490577975s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-948579 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-948579 image pull gcr.io/k8s-minikube/busybox: (1.605125584s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-948579
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-948579: (10.690012276s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-948579 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0926 23:22:38.440386 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-948579 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (57.155377568s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-948579 image list
helpers_test.go:175: Cleaning up "test-preload-948579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-948579
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-948579: (2.194007252s)
--- PASS: TestPreload (143.34s)

TestScheduledStopUnix (94.78s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-090392 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-090392 --memory=3072 --driver=docker  --container-runtime=docker: (21.789019664s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090392 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-090392 -n scheduled-stop-090392
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090392 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0926 23:23:03.149686 1399974 retry.go:31] will retry after 103.202µs: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.150851 1399974 retry.go:31] will retry after 93.488µs: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.151987 1399974 retry.go:31] will retry after 123.39µs: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.153115 1399974 retry.go:31] will retry after 277.762µs: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.154245 1399974 retry.go:31] will retry after 558.863µs: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.155376 1399974 retry.go:31] will retry after 707.218µs: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.156518 1399974 retry.go:31] will retry after 1.679131ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.158706 1399974 retry.go:31] will retry after 1.97623ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.160915 1399974 retry.go:31] will retry after 1.594615ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.163105 1399974 retry.go:31] will retry after 4.736255ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.168306 1399974 retry.go:31] will retry after 3.992104ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.172510 1399974 retry.go:31] will retry after 11.386509ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.184716 1399974 retry.go:31] will retry after 15.447437ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.200964 1399974 retry.go:31] will retry after 27.85641ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
I0926 23:23:03.229202 1399974 retry.go:31] will retry after 18.264731ms: open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/scheduled-stop-090392/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090392 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-090392 -n scheduled-stop-090392
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-090392
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090392 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-090392
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-090392: exit status 7 (68.512084ms)

-- stdout --
	scheduled-stop-090392
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-090392 -n scheduled-stop-090392
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-090392 -n scheduled-stop-090392: exit status 7 (69.365233ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-090392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-090392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-090392: (1.646158621s)
--- PASS: TestScheduledStopUnix (94.78s)
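The retry.go lines above show the test polling for the scheduled-stop pid file with a growing backoff. A minimal sketch of that poll loop, with illustrative durations (the real retry helper adds jitter, which is why the logged intervals are not exact doublings):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the time budget is exhausted,
// roughly doubling the wait between attempts.
func waitForFile(path string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	wait := 100 * time.Microsecond // illustrative starting interval
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("open %s: no such file or directory", path)
		}
		fmt.Printf("will retry after %v\n", wait)
		time.Sleep(wait)
		wait *= 2 // grow the interval, as the logged retries do
	}
}

func main() {
	// Hypothetical path standing in for the profile's pid file.
	if err := waitForFile("/tmp/scheduled-stop-090392.pid", 50*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```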

TestSkaffold (74.06s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe957794477 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-407544 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-407544 --memory=3072 --driver=docker  --container-runtime=docker: (22.155489676s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe957794477 run --minikube-profile skaffold-407544 --kube-context skaffold-407544 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe957794477 run --minikube-profile skaffold-407544 --kube-context skaffold-407544 --status-check=true --port-forward=false --interactive=false: (37.020983211s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-55c86797bb-tv5bg" [23d0fa29-c954-45a0-9ca4-140dee18cb72] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003147793s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-748dd49d65-chjzj" [8173cb4c-1797-4323-a412-9e5ad08d849d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003400231s
helpers_test.go:175: Cleaning up "skaffold-407544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-407544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-407544: (3.151637155s)
--- PASS: TestSkaffold (74.06s)

TestInsufficientStorage (9.92s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-870054 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-870054 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.716208353s)

-- stdout --
	{"specversion":"1.0","id":"3d064ec4-a4c6-471e-aeb5-b676d21a0339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-870054] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d526a7a0-af47-408b-8e61-1403908d12c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"5894a575-1fab-4540-9d59-663b491ff69b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2fb512b6-1daa-4959-b996-b6d7106da452","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig"}}
	{"specversion":"1.0","id":"f1ae3a90-a14c-4be0-a1aa-fb1968a5adf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube"}}
	{"specversion":"1.0","id":"dfd844a5-f227-4c0a-b753-1ecc87f579be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"da450759-3d12-440c-96bf-9c96745ea938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b71c8344-b8f0-403c-b781-6434ca496c92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cea41076-d0b2-4048-a28a-9422b7a3a800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bcb3771e-b2b0-4972-9c50-b158898607cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbe6749b-e194-4407-9cdf-bc19482d8846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"821736de-c4b9-4c9c-b598-f6351ae00154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-870054\" primary control-plane node in \"insufficient-storage-870054\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5672dcbc-a3d4-4d93-8b83-fab8e4b01bce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"881a5e7f-ac7a-429e-be17-266ab66c2374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"26e6cae6-dd2d-467f-9b39-32ed135e3c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-870054 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-870054 --output=json --layout=cluster: exit status 7 (272.46508ms)

-- stdout --
	{"Name":"insufficient-storage-870054","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-870054","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0926 23:25:37.771046 1672763 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-870054" does not appear in /home/jenkins/minikube-integration/21642-1396392/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-870054 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-870054 --output=json --layout=cluster: exit status 7 (275.682831ms)

-- stdout --
	{"Name":"insufficient-storage-870054","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-870054","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0926 23:25:38.047114 1672869 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-870054" does not appear in /home/jenkins/minikube-integration/21642-1396392/kubeconfig
	E0926 23:25:38.058203 1672869 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/insufficient-storage-870054/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-870054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-870054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-870054: (1.658194571s)
--- PASS: TestInsufficientStorage (9.92s)
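`minikube start --output=json` emits one CloudEvents-style object per line, and the test keys off the final `io.k8s.sigs.minikube.error` event (RSRC_DOCKER_STORAGE, exit code 26). A minimal consumer sketch; the field names mirror the events logged above, but the two-line stream is a stand-in for real output:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event captures only the fields used below; data values are all
// strings in these events.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating docker container (CPUs=2, Memory=3072MB) ..."}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("start failed: %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			return
		}
		fmt.Println("step:", ev.Data["message"])
	}
}
```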

TestRunningBinaryUpgrade (56.18s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4253512234 start -p running-upgrade-500707 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4253512234 start -p running-upgrade-500707 --memory=3072 --vm-driver=docker  --container-runtime=docker: (25.504440821s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-500707 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-500707 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.859082159s)
helpers_test.go:175: Cleaning up "running-upgrade-500707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-500707
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-500707: (2.349486398s)
--- PASS: TestRunningBinaryUpgrade (56.18s)

TestKubernetesUpgrade (347.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.528939701s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-920645
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-920645: (11.863041035s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-920645 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-920645 status --format={{.Host}}: exit status 7 (93.783849ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.492497592s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-920645 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (66.686334ms)

-- stdout --
	* [kubernetes-upgrade-920645] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-920645
	    minikube start -p kubernetes-upgrade-920645 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9206452 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-920645 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-920645 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.054658025s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-920645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-920645
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-920645: (2.496037221s)
--- PASS: TestKubernetesUpgrade (347.66s)
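The downgrade step above fails by design: the upgrade v1.28.0 → v1.34.0 succeeds, while the reverse exits 106 with K8S_DOWNGRADE_UNSUPPORTED. A naive sketch of that guard (minikube's real semver handling is more involved; this comparison is a stand-in):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.34.0" into comparable major/minor/patch integers.
func parse(v string) (parts [3]int) {
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		parts[i], _ = strconv.Atoi(p)
	}
	return
}

func checkVersionChange(current, requested string) error {
	c, r := parse(current), parse(requested)
	for i := range c {
		switch {
		case r[i] > c[i]:
			return nil // upgrade: allowed
		case r[i] < c[i]:
			return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade existing Kubernetes %s cluster to %s",
				current, requested)
		}
	}
	return nil // same version: plain restart, as the test's final step does
}

func main() {
	fmt.Println(checkVersionChange("v1.34.0", "v1.28.0")) // rejected (minikube exits 106)
	fmt.Println(checkVersionChange("v1.28.0", "v1.34.0")) // allowed
}
```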

TestMissingContainerUpgrade (68.81s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2193296836 start -p missing-upgrade-653663 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2193296836 start -p missing-upgrade-653663 --memory=3072 --driver=docker  --container-runtime=docker: (22.895619613s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-653663
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-653663: (1.687478676s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-653663
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-653663 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-653663 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.958919833s)
helpers_test.go:175: Cleaning up "missing-upgrade-653663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-653663
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-653663: (1.775882568s)
--- PASS: TestMissingContainerUpgrade (68.81s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620590 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-620590 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (83.260802ms)

-- stdout --
	* [NoKubernetes-620590] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-1396392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-1396392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
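The MK_USAGE failure above comes from a mutual-exclusion check between `--no-kubernetes` and `--kubernetes-version`. A minimal sketch of such a check with Go's flag package; the wiring is illustrative, not minikube's actual CLI code:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The two flags contradict each other; reject the combination up front.
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status the test asserts
	}
	fmt.Println("ok")
}
```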

TestStoppedBinaryUpgrade/Setup (0.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

TestNoKubernetes/serial/StartWithK8s (38.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620590 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620590 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.249762861s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-620590 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.64s)

TestStoppedBinaryUpgrade/Upgrade (70.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2534235767 start -p stopped-upgrade-702527 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2534235767 start -p stopped-upgrade-702527 --memory=3072 --vm-driver=docker  --container-runtime=docker: (42.215820589s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2534235767 -p stopped-upgrade-702527 stop
E0926 23:26:24.381157 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2534235767 -p stopped-upgrade-702527 stop: (10.747990153s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-702527 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-702527 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (17.130311757s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.09s)

TestNoKubernetes/serial/StartWithStopK8s (17.63s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (15.589575057s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-620590 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-620590 status -o json: exit status 2 (326.688046ms)

-- stdout --
	{"Name":"NoKubernetes-620590","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-620590
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-620590: (1.711037128s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.63s)

TestNoKubernetes/serial/Start (7.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.114643697s)
--- PASS: TestNoKubernetes/serial/Start (7.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-620590 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-620590 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.06034ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
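The check above runs `systemctl is-active` over `minikube ssh` and treats a non-zero exit as "kubelet not running" (systemd reports exit status 3 for an inactive unit). A minimal local sketch of that exit-status handling, mirroring the logged command line:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// In the test this command runs inside the node via `minikube ssh`;
	// here it runs locally purely to illustrate the exit-status handling.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active (the test would fail here)")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet not active, exit status %d (what the test expects)\n",
			exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}
```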

TestNoKubernetes/serial/ProfileList (2.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.082380437s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.03s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-620590
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-620590: (1.234192786s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.66s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620590 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620590 --driver=docker  --container-runtime=docker: (7.664093848s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-702527
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-620590 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-620590 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.07781ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/Start (61.45s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-737833 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-737833 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m1.447679419s)
--- PASS: TestPause/serial/Start (61.45s)

TestPause/serial/SecondStartNoReconfiguration (52.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-737833 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-737833 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.092599999s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (39.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-721822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-721822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (39.810590026s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (39.81s)

TestPause/serial/Pause (0.47s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-737833 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.47s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-737833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-737833 --output=json --layout=cluster: exit status 2 (309.786307ms)

-- stdout --
	{"Name":"pause-737833","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-737833","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
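The `--layout=cluster` JSON above encodes state with HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage. A minimal sketch that decodes the paused-cluster payload; the structs mirror only the keys shown in the log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]component
	}
}

func main() {
	// Trimmed from the log's VerifyStatus output.
	raw := `{"Name":"pause-737833","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-737833","Components":{
	    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", cs.Name, cs.StatusName, cs.StatusCode)
	for _, n := range cs.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s: %s (%d)\n", name, c.StatusName, c.StatusCode)
		}
	}
}
```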

TestPause/serial/Unpause (0.47s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-737833 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

TestPause/serial/PauseAgain (0.54s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-737833 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

TestPause/serial/DeletePaused (2.15s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-737833 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-737833 --alsologtostderr -v=5: (2.147900185s)
--- PASS: TestPause/serial/DeletePaused (2.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-721822 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [29f4dc6e-4aba-4bfd-82ad-43471ad4a853] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [29f4dc6e-4aba-4bfd-82ad-43471ad4a853] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003369136s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-721822 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.33s)

TestPause/serial/VerifyDeletedResources (18.73s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (18.658161869s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-737833
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-737833: exit status 1 (20.414769ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-737833: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (18.73s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-721822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-721822 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/old-k8s-version/serial/Stop (10.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-721822 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-721822 --alsologtostderr -v=3: (10.728259955s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.73s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-721822 -n old-k8s-version-721822
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-721822 -n old-k8s-version-721822: exit status 7 (68.094564ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-721822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-721822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-721822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (50.308198794s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-721822 -n old-k8s-version-721822
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.62s)

TestStartStop/group/no-preload/serial/FirstStart (47.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-351114 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0926 23:30:15.620610 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:15.627131 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:15.638521 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:15.659965 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:15.701409 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:15.782763 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:15.944285 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:16.265695 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:16.907947 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:18.189763 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:20.751175 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:25.873173 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:36.115377 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-351114 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (47.993823297s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.99s)
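The burst of cert_rotation errors above appears to be client-go retrying, with exponential backoff, a client certificate belonging to the already-deleted skaffold-407544 profile; it is stderr noise from the test binary and does not affect the result. As a minimal sketch, assuming a built binary at out/minikube-linux-amd64 and a running Docker daemon, the no-preload start can be replayed with the same flags (the profile name is arbitrary):

    # --preload=false skips the preloaded images tarball, so all images are pulled fresh.
    out/minikube-linux-amd64 start -p no-preload-351114 --memory=3072 \
      --alsologtostderr --wait=true --preload=false \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0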

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-351114 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [11ea65ed-d1af-4419-b4cd-0977f00683e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [11ea65ed-d1af-4419-b4cd-0977f00683e1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003945037s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-351114 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)
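The DeployApp step can be reproduced by hand, assuming kubectl and the repository's testdata/busybox.yaml; the closing exec mirrors the test's open-file-limit probe:

    kubectl --context no-preload-351114 create -f testdata/busybox.yaml
    # The test budget for readiness is 8m0s.
    kubectl --context no-preload-351114 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-351114 exec busybox -- /bin/sh -c "ulimit -n"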

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n2gtn" [52338c6f-4134-4f4e-9aad-4b0814613863] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00284685s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n2gtn" [52338c6f-4134-4f4e-9aad-4b0814613863] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004010818s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-721822 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-351114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-351114 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-351114 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-351114 --alsologtostderr -v=3: (12.170251991s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-721822 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-721822 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-721822 -n old-k8s-version-721822
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-721822 -n old-k8s-version-721822: exit status 2 (392.071055ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-721822 -n old-k8s-version-721822
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-721822 -n old-k8s-version-721822: exit status 2 (325.130048ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-721822 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-721822 -n old-k8s-version-721822
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-721822 -n old-k8s-version-721822
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)
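The exit status 2 results in this block are expected rather than failures: minikube status appears to encode component state in its exit code, so querying a paused apiserver or a stopped kubelet returns non-zero even though the pause succeeded (hence the harness note "may be ok"). A sketch of the same round trip, assuming the profile exists:

    out/minikube-linux-amd64 pause -p old-k8s-version-721822
    # Non-zero exit is expected while paused; the template prints the component state.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-721822 || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-721822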

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-695625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-695625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (38.191491883s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.19s)
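--apiserver-port=8444 moves the API server off the default 8443; under the docker driver the port is still reached through a forwarded host port, but the kubeconfig entry for the profile should reflect the resulting server URL. A quick check, assuming the profile above:

    # Print the API server URL recorded for this cluster in kubeconfig.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-695625")].cluster.server}'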

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (31.73s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-198546 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-198546 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (31.725038662s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.73s)
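This start exercises a bare CNI configuration: --network-plugin=cni plus a kubeadm pod-network CIDR override, with --wait relaxed to apiserver, system_pods, and default_sa because ordinary pods cannot schedule until a CNI is actually installed (the later "cni mode requires additional setup" warnings refer to this). The flags replay as-is:

    out/minikube-linux-amd64 start -p newest-cni-198546 --memory=3072 \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0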

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-351114 -n no-preload-351114
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-351114 -n no-preload-351114: exit status 7 (83.580649ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-351114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
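Here exit status 7 appears to be the status code for a stopped host, and the point of the test is that addons enable still succeeds against a stopped profile: it only records the addon in the profile's saved config, taking effect on the next start. A sketch, assuming the profile was stopped first:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-351114 || true  # prints Stopped, exits non-zero
    # Enabling an addon on a stopped profile just updates its saved config.
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-351114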

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.77s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-351114 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0926 23:31:24.381071 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:31:37.558409 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-351114 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (49.36828652s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-351114 -n no-preload-351114
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-198546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-198546 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-198546 --alsologtostderr -v=3: (10.121262269s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-695625 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [654a18d6-3f00-4e12-bfdd-e73e20e3dda7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [654a18d6-3f00-4e12-bfdd-e73e20e3dda7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004061186s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-695625 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-198546 -n newest-cni-198546
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-198546 -n newest-cni-198546: exit status 7 (70.168525ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-198546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-198546 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-198546 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (12.646501281s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-198546 -n newest-cni-198546
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-695625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-695625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-695625 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-695625 --alsologtostderr -v=3: (10.837045054s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7rq7j" [063f697f-7d48-41e8-8f3b-c6e72f5f7cad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004553449s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-198546 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-198546 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-198546 -n newest-cni-198546
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-198546 -n newest-cni-198546: exit status 2 (301.634674ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-198546 -n newest-cni-198546
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-198546 -n newest-cni-198546: exit status 2 (304.096973ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-198546 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-198546 -n newest-cni-198546
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-198546 -n newest-cni-198546
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625: exit status 7 (71.386468ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-695625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-695625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-695625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (53.600146066s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7rq7j" [063f697f-7d48-41e8-8f3b-c6e72f5f7cad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003702784s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-351114 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (66.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-072592 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-072592 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m6.966228049s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.97s)
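--embed-certs inlines the client certificate and key into the generated kubeconfig (as *-data fields) instead of referencing files under the .minikube directory. One way to confirm, assuming the profile's context exists:

    # Expect client-certificate-data/client-key-data rather than file paths.
    kubectl config view --raw --minify --context=embed-certs-072592 | grep 'client-.*-data'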

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-351114 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-351114 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-351114 -n no-preload-351114
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-351114 -n no-preload-351114: exit status 2 (376.025553ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-351114 -n no-preload-351114
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-351114 -n no-preload-351114: exit status 2 (314.29364ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-351114 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-351114 -n no-preload-351114
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-351114 -n no-preload-351114
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.61s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.35s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0926 23:32:38.439656 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/functional-618103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (44.34629537s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (44.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0926 23:32:59.480711 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (44.426524565s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2ghmc" [7f41882c-a895-4b65-9286-6eb267b9afaf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004238603s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-733869 "pgrep -a kubelet"
I0926 23:33:11.889838 1399974 config.go:182] Loaded profile config "auto-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
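KubeletFlags simply greps the kubelet command line over SSH to verify that the configured flags reached the node; the same probe works interactively:

    # List the running kubelet process with its full argument vector.
    out/minikube-linux-amd64 ssh -p auto-733869 "pgrep -a kubelet"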

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fhf5q" [3762aac9-b846-4d84-8c46-70e0c3ebb0b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fhf5q" [3762aac9-b846-4d84-8c46-70e0c3ebb0b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004628094s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2ghmc" [7f41882c-a895-4b65-9286-6eb267b9afaf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00395071s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-695625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
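The DNS/Localhost/HairPin trio probes the netcat deployment from inside its own pod: name resolution of kubernetes.default, a TCP connect to its own localhost:8080, and a hairpin connect back to itself through the netcat service name. nc -z only checks for a listener, with -w 5 bounding each attempt. Assuming the deployment from testdata/netcat-deployment.yaml is running:

    kubectl --context auto-733869 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin case: the pod reaches itself via its own service.
    kubectl --context auto-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"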

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-695625 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-695625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625: exit status 2 (351.064028ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625: exit status 2 (355.504247ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-695625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-695625 -n default-k8s-diff-port-695625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-072592 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99198b2b-8362-476c-ba9e-fa8f3664d1b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [99198b2b-8362-476c-ba9e-fa8f3664d1b0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00444592s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-072592 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vt7hb" [7168d93a-129c-47df-9104-9f6743d5ab3d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004730903s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
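ControllerPod gates the later connectivity checks on the CNI agent pod being healthy; kubectl wait expresses the same condition, assuming the kindnet profile:

    # Block until the kindnet node agent is Ready (the test allows 10m0s).
    kubectl --context kindnet-733869 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m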

                                                
                                    
TestNetworkPlugins/group/calico/Start (51.23s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (51.230707038s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-733869 "pgrep -a kubelet"
I0926 23:33:31.846188 1399974 config.go:182] Loaded profile config "kindnet-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4mbz9" [4b203eea-53b5-4977-b48e-d5a5ddce916c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4mbz9" [4b203eea-53b5-4977-b48e-d5a5ddce916c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005934936s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-072592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-072592 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-072592 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-072592 --alsologtostderr -v=3: (11.736533143s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.74s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (52.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (52.756719542s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.76s)
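Besides the built-in plugin names, --cni accepts a path to an arbitrary CNI manifest, which is what this run does with the repository's testdata/kube-flannel.yaml:

    out/minikube-linux-amd64 start -p custom-flannel-733869 --memory=3072 \
      --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=docker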

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072592 -n embed-certs-072592
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072592 -n embed-certs-072592: exit status 7 (100.985337ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-072592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-072592 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-072592 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (55.043738484s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072592 -n embed-certs-072592
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.38s)

                                                
                                    
TestNetworkPlugins/group/false/Start (68.14s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m8.144581559s)
--- PASS: TestNetworkPlugins/group/false/Start (68.14s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mpcdp" [cd975b63-0055-431c-ac3c-4d8b23f86423] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004185712s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-733869 "pgrep -a kubelet"
I0926 23:34:23.800503 1399974 config.go:182] Loaded profile config "calico-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ckvf9" [c5cd6ebc-7f94-4a03-b820-4385e6972903] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ckvf9" [c5cd6ebc-7f94-4a03-b820-4385e6972903] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004234335s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-733869 "pgrep -a kubelet"
I0926 23:34:33.422332 1399974 config.go:182] Loaded profile config "custom-flannel-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nrlzd" [3fca5c23-8908-4397-87de-f4673dfea5df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nrlzd" [3fca5c23-8908-4397-87de-f4673dfea5df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003446709s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
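
HairPin is the same probe aimed at the pod's own Service name (netcat) rather than localhost: the packet leaves the pod, reaches the service VIP, and must be NATed back into the very pod that sent it, which only works when hairpin mode is enabled on the CNI or bridge. If HairPin fails while Localhost passes, the per-port bridge setting can be checked on the node (sysfs path is illustrative and depends on the bridge in use):

kubectl --context calico-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
# minikube ssh -p calico-733869 -- 'cat /sys/class/net/*/brif/*/hairpin_mode'   # 1 = hairpin enabled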

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-29gfh" [9f62c939-3fd1-4008-b459-628fc24a042a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003335028s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-29gfh" [9f62c939-3fd1-4008-b459-628fc24a042a] Running
E0926 23:34:49.619525 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:49.625918 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:49.637362 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:49.658718 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:49.700888 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:49.783073 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:49.944804 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:50.266519 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:34:50.908598 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004293198s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-072592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0926 23:34:52.190317 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-072592 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)
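
VerifyKubernetesImages lists the images loaded in the cluster and reports anything outside the expected minikube set; the busybox image left over from the earlier user-app checks is noted but tolerated. The same listing is available interactively:

out/minikube-linux-amd64 -p embed-certs-072592 image list --format=json
out/minikube-linux-amd64 -p embed-certs-072592 image list --format=table   # human-readable variant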

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-072592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072592 -n embed-certs-072592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072592 -n embed-certs-072592: exit status 2 (340.495952ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072592 -n embed-certs-072592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072592 -n embed-certs-072592: exit status 2 (334.122042ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-072592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072592 -n embed-certs-072592
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072592 -n embed-certs-072592
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.58s)
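
The Pause subtest drives a full pause/unpause round trip. While paused, minikube status deliberately exits 2 (a degraded-but-expected state, hence the "may be ok" notes above), with the apiserver reporting Paused and the kubelet Stopped; after unpause both probes succeed again. Condensed:

out/minikube-linux-amd64 pause -p embed-certs-072592
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072592   # Paused  (exit 2)
out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072592     # Stopped (exit 2)
out/minikube-linux-amd64 unpause -p embed-certs-072592
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072592   # exits 0 once running again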

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (67.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0926 23:34:54.752066 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m7.361113143s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (47.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0926 23:34:59.873447 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.133001215s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0926 23:35:10.115397 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/old-k8s-version-721822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m7.719190618s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.72s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-733869 "pgrep -a kubelet"
I0926 23:35:12.500993 1399974 config.go:182] Loaded profile config "false-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tgg99" [bdf617f3-53f0-4be0-971a-ddcb2678f505] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0926 23:35:15.620053 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tgg99" [bdf617f3-53f0-4be0-971a-ddcb2678f505] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003394024s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (66.48s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0926 23:35:43.323125 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/skaffold-407544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-733869 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m6.477774632s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (66.48s)
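
Each network-plugin group boots its own profile, differing only in how the CNI is selected. Stripped to the distinguishing flags, the four Start invocations logged above are:

minikube start -p enable-default-cni-733869 --enable-default-cni=true  --driver=docker --container-runtime=docker
minikube start -p flannel-733869            --cni=flannel              --driver=docker --container-runtime=docker
minikube start -p bridge-733869             --cni=bridge               --driver=docker --container-runtime=docker
minikube start -p kubenet-733869            --network-plugin=kubenet   --driver=docker --container-runtime=docker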

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-mlhz4" [1c82d409-2c95-4e6a-aa3b-4eb53df0c422] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004107905s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
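
ControllerPod gates flannel's traffic tests on the CNI daemonset itself being healthy; the harness polls by label in the kube-flannel namespace, equivalent to:

kubectl --context flannel-733869 -n kube-flannel get pods -l app=flannel
kubectl --context flannel-733869 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m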

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-733869 "pgrep -a kubelet"
I0926 23:35:53.263203 1399974 config.go:182] Loaded profile config "flannel-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9vttg" [7f54b269-55e2-4f9b-a740-144b10fa986c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9vttg" [7f54b269-55e2-4f9b-a740-144b10fa986c] Running
E0926 23:35:58.442768 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:58.449584 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:58.460950 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:58.482327 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:58.523829 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:58.605842 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:58.767412 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:59.089074 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:35:59.730904 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:01.012606 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003720101s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-733869 "pgrep -a kubelet"
I0926 23:36:02.396419 1399974 config.go:182] Loaded profile config "enable-default-cni-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xf4fk" [517c3064-a961-416f-9d03-be5fed8dc7eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xf4fk" [517c3064-a961-416f-9d03-be5fed8dc7eb] Running
E0926 23:36:07.447596 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/addons-619347/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:08.696727 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004732336s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-733869 exec deployment/netcat -- nslookup kubernetes.default
E0926 23:36:03.574720 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-733869 "pgrep -a kubelet"
I0926 23:36:12.961338 1399974 config.go:182] Loaded profile config "bridge-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vnk69" [edc4bb1e-a314-4690-9708-fde4ebb1af2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vnk69" [edc4bb1e-a314-4690-9708-fde4ebb1af2f] Running
E0926 23:36:18.938216 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/no-preload-351114/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00492949s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-733869 "pgrep -a kubelet"
I0926 23:36:49.387797 1399974 config.go:182] Loaded profile config "kubenet-733869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-733869 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2qlwf" [025e640d-10be-4824-9b83-1ec1f00eb6af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2qlwf" [025e640d-10be-4824-9b83-1ec1f00eb6af] Running
E0926 23:36:54.649026 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:54.655387 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:54.666723 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:54.688040 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:54.729443 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:54.810891 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:54.972460 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:55.294177 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:55.935464 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:57.217269 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003586715s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-733869 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0926 23:36:59.778997 1399974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/default-k8s-diff-port-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-733869 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    

Test skip (22/346)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-690411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-690411
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-733869 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-733869

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-733869" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-733869

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-733869

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-733869" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-733869" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-733869

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-733869

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-733869" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-733869" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-733869" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-733869" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-733869" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: kubelet daemon config:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> k8s: kubelet logs:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-693611
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:27:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-920645
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-1396392/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:28:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-737833
contexts:
- context:
    cluster: cert-expiration-693611
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-693611
  name: cert-expiration-693611
- context:
    cluster: kubernetes-upgrade-920645
    user: kubernetes-upgrade-920645
  name: kubernetes-upgrade-920645
- context:
    cluster: pause-737833
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:28:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-737833
  name: pause-737833
current-context: ""
kind: Config
users:
- name: cert-expiration-693611
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/cert-expiration-693611/client.crt
    client-key: /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/cert-expiration-693611/client.key
- name: kubernetes-upgrade-920645
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/kubernetes-upgrade-920645/client.crt
    client-key: /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/kubernetes-upgrade-920645/client.key
- name: pause-737833
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/pause-737833/client.crt
    client-key: /home/jenkins/minikube-integration/21642-1396392/.minikube/profiles/pause-737833/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-733869

>>> host: docker daemon status:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: docker daemon config:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: docker system info:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: cri-docker daemon status:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: cri-docker daemon config:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: cri-dockerd version:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: containerd daemon status:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: containerd daemon config:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: containerd config dump:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: crio daemon status:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: crio daemon config:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: /etc/crio:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

>>> host: crio config:
* Profile "cilium-733869" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733869"

----------------------- debugLogs end: cilium-733869 [took: 3.08865976s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-733869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-733869
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)